
FAQ - ADO Query Executer

What is ADO Query Executer?

ADO Query Executer is a Win32 application that executes SQL queries against any OLEDB- or ODBC-compatible database (through MS ADO). It also provides a way to export data returned by queries to CSV or other text files. One key feature of ADOQE is that it is simple to run: no installation is needed, and the whole application is a single EXE file, built on the "Download & Run" principle. The other key feature is that it can export large datasets (tables) to one or more CSV files without putting too much load on the database or on memory.
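Since ADOQE connects through ADO, database access is ultimately driven by a standard ADO/OLE DB connection string. Purely as an illustration of what "OLEDB compatible" means in practice (this is generic ADO knowledge, not ADOQE-specific documentation; the server and database names are placeholders), a connection string for SQL Server might look like this:
Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;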


Who can use ADO Query Executer?

ADOQE primarily targets database administrators and anyone else who works with databases. Sometimes you have to quickly execute a few queries (whether to INSERT, UPDATE, DELETE or SELECT data) and you don't have any proper tools at your disposal. This is where ADOQE comes in: all you need is internet access to download it and you're ready to go.


Does ADOQE consume much memory when exporting large datasets?

When exporting large datasets, you can form your SQL statement so that it does not consume too much memory (RAM). For example, when exporting a table with 1,000,000+ rows from SQL Server, all you need to do is limit the number of records returned per execution with a TOP clause, order the results by an ID field (usually an int), and provide a well-formed WHERE clause; ADOQE will then execute the query as many times as needed (the special export variables and commands involved are described in ADOQE's inline help). For example, if you use TOP 5000 in a SQL statement that selects records from a table with 1,000,000 records, ADOQE will execute that query 200 times, and each time a new set of data will be written to the output file. You can also limit how many rows may be written to a single export file (e.g. 10,000); when this limit is reached, ADOQE starts writing results into a new file. A complete worked example of this technique is given in the next question.


How do I export a simple table with an identity field using ADOQE?

Here is an example script, with a description, that exports data with ADOQE.
The script exports data from the table PERSONAL_DATA (residing in the PRS schema), 5000 rows at a time (note the TOPCOUNT variable defined with &&DEFINE_GLOBAL). Besides the TOPCOUNT variable, we declare that we want to save the table to "C:\Temp\PersonData.csv" and that the ID column is nPersonID (meaning that this column's last value in the current SQL execution will replace the <<LastID>> placeholder the next time the SQL executes):
&&DEFINE_GLOBAL: TOPCOUNT = 5000
&&SaveTo: C:\Temp\PersonData.csv
&&IDColumn: nPersonID
select top %TOPCOUNT% nPersonID, vcFirstName, vcLastName, vcAddress, vcCountry, dtPostDate
from PRS.PERSONAL_DATA
where (nPersonID > <<LastID>>)
order by nPersonID
The SQL is sorted by nPersonID, and we request only those records where nPersonID is greater than the last nPersonID read from the dataset in the prior SQL execution. To better understand this relation between WHERE and ORDER BY, let's assume the PERSONAL_DATA table holds 12,000 records and that nPersonID is defined as an identity column (in SQL Server; much like AutoNumber in Microsoft Access, or any autonumber column in other databases such as Oracle or DB2). Let's further assume these 12,000 records have incremental IDs in nPersonID ranging from 1 to 12,000. So we have a table with 12,000 rows, each with a distinct ID from 1 to 12,000. ADOQE now takes this SQL (stripping it of the lines prefixed with &&) and replaces the variable placeholders. First it replaces %TOPCOUNT% with 5000 (the value defined for the TOPCOUNT variable); then it replaces "<<LastID>>" with the ID taken from the last SQL execution. Since no SQL has been executed yet, this placeholder gets the value 0.
So ADOQE ends up executing this query:
select top 5000 nPersonID, vcFirstName, vcLastName, vcAddress, vcCountry, dtPostDate
from PRS.PERSONAL_DATA
where (nPersonID > 0)
order by nPersonID
This is how we get our first 5000 rows of data, which are immediately written to the output file (C:\Temp\PersonData.csv in our case). Now, what about the other 7,000 rows? Since ADOQE has read some records (5000, to be exact), it assumes the export isn't over (to signal that it is over, a SQL execution must return an empty dataset), so it executes the SQL statement again, only with slightly different parameter replacement. Again it takes the SQL (stripping the lines prefixed with &&) and replaces the placeholders. %TOPCOUNT% is replaced exactly as the first time, with the value 5000. The "<<LastID>>" placeholder, however, is replaced differently. The &&IDColumn variable (with the value "nPersonID") tells ADOQE which column to treat as the ID column: after each execution, ADOQE remembers that column's value in the last row of the returned dataset and uses it as the replacement for "<<LastID>>" in the next execution. In our case, the last of the first 5000 rows has nPersonID = 5000, so ADOQE now generates a new SQL that looks like this:
select top 5000 nPersonID, vcFirstName, vcLastName, vcAddress, vcCountry, dtPostDate
from PRS.PERSONAL_DATA
where (nPersonID > 5000)
order by nPersonID
As you can see, ADOQE has generated a slightly different SQL by replacing the "<<LastID>>" placeholder with the ID column's value from the last row.
We again get 5000 records back (5000 the first time and 5000 the second time; with 12,000 records in the table in total, it takes three iterations of 5000-record chunks to browse or export all the data). ADOQE writes these new 5000 records to the output file. The next execution follows the same logic, only this time the "<<LastID>>" placeholder is replaced with the last row's ID column value, which is now 10,000:
select top 5000 nPersonID, vcFirstName, vcLastName, vcAddress, vcCountry, dtPostDate
from PRS.PERSONAL_DATA
where (nPersonID > 10000)
order by nPersonID
Since we have only 12,000 records with IDs from 1 to 12,000, this query returns only 2,000 records (only those whose IDs are greater than 10,000). ADOQE retrieves these 2,000 rows and writes them to the file. Next, it executes the SQL again with "<<LastID>>" replaced by 12,000. This time the returned dataset is empty (there is no record with nPersonID greater than 12,000), so ADOQE knows it has finished exporting the table contents.

As you can see, the same approach works on much bigger tables. You could even use a complex join of tables to prepare data for import into some other system. The point is that ADOQE can export a large dataset in smaller steps, using plain SQL statements and without needing the DB vendor's tools (e.g. bcp from Microsoft or the EXPORT command from IBM DB2). It also produces very straightforward text files, and you can specify the delimiter to be used. The most commonly used delimiter is TAB (\t), since it rarely appears inside column values; you can therefore count on any tab found being a delimiter rather than part of a value. You can also limit the number of rows written into one export file with a special variable (e.g. &&DEFINE_GLOBAL: LinesPerFile = 500000); see the sketch below. More about the commands and valid variables can be found in the inline help.
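As a minimal sketch (reusing the PERSONAL_DATA table from the worked example above), the LinesPerFile variable can be combined with the other commands. With 12,000 rows and a 5,000-line limit per file, this export would be split across roughly three files; exactly how the additional files are named is covered in the inline help:
&&DEFINE_GLOBAL: TOPCOUNT = 5000
&&DEFINE_GLOBAL: LinesPerFile = 5000
&&SaveTo: C:\Temp\PersonData.csv
&&IDColumn: nPersonID
select top %TOPCOUNT% nPersonID, vcFirstName, vcLastName, vcAddress, vcCountry, dtPostDate
from PRS.PERSONAL_DATA
where (nPersonID > <<LastID>>)
order by nPersonID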


Can I use custom variables inside ADOQE SQL statements?

The answer is: YES. You can define as many custom variables as you want and use them in your script; variables can hold any value. Variables are defined in one of two ways:
&&DEFINE_GLOBAL
&&DEFINE_LOCAL
DEFINE_GLOBAL defines a global variable, that is, a variable accessible to all SQL statements executed within a batch (a batch is a group of SQL statements executed together). DEFINE_LOCAL defines a variable that is only accessible from the SQL statement that follows the definition. Whenever you want to use a variable inside a SQL statement, you must enclose the variable's name in percent signs (%), e.g. %VAT%. For example, consider this script:
&&DEFINE_GLOBAL: PRICE_WITH_VAT = 1.2
&&DEFINE_LOCAL: PRICE_WITH_DISCOUNT = 0.9
select nItemID, fItemPrice, fItemPrice * %PRICE_WITH_DISCOUNT% * %PRICE_WITH_VAT% AS TotalPrice
from sch.Items
WENT
&&DEFINE_LOCAL: PRICE_WITH_DISCOUNT = 0.8
select nItemID, fItemPrice, fItemPrice * %PRICE_WITH_DISCOUNT% * %PRICE_WITH_VAT% AS TotalPrice
from sch.Items
WENT

This script shows two SQL statements that can be executed together. We have one global variable and two local variables. Each local variable defines the discount used in the calculation, and the global variable defines the VAT multiplier used in both SQL statements. These variables are replaced with actual values before execution, as illustrated below.
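For illustration, after variable substitution the first statement in the batch is executed as:
select nItemID, fItemPrice, fItemPrice * 0.9 * 1.2 AS TotalPrice
from sch.Items
and the second one, with its own local discount, as:
select nItemID, fItemPrice, fItemPrice * 0.8 * 1.2 AS TotalPrice
from sch.Items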
Variables can be used to parameterize any values that may change between runs, or simply to write cleaner SQL in which the values to be changed are defined up front.