Washington Area Informix User Group Newsletter


Volume 3, No. 6 - December 1993


Highlights of this Issue

Indexing Strategies, by Kevin Fennimore

An Introduction to Using SQL in Shell Scripts, by Lester Knutsen

Informix Online Database Administration Tips, by Madhu Reddy

WAIUG Forum 93

On November 16, 1993 the Washington Area Informix Users Group held its first annual user group forum at the Springfield Hilton. The forum was a one day event with 16 sessions on Informix database development, and 12 exhibitors demonstrating exciting new database tools and products. Over 230 people registered for the forum, exceeding our plans for 150 people. Each person who registered received a diskette of public domain software and a one year membership in the user group.

Several participants said that there were so many good sessions going on at one time that it was very difficult to decide which one to attend. We had standing room only at a few sessions, and the session on "Tips for Informix Programming" was so full that people could not get in. We have invited Kevin Fennimore to repeat this session at our next user group meeting. The Question and Answer session was also very well received, with participants getting answers to questions that had been vexing them. The forum was a great success and we plan to hold another one next year. We will have it at a larger location so there can be more people, more sessions, and more exhibitors.

Next Meeting

Several members have suggested that we hold our meetings in the evening. Others have asked us to have meetings in Maryland. We have decided to implement both these suggestions. The Henry M. Jackson Foundation has very graciously offered us the use of their conference room for our February meeting. The meeting will include two items on the agenda:

Date: February 9th, 1994, 5:00 pm to 9:00 pm

Location: Henry M. Jackson Foundation

1401 Rockville Pike, Suite 600, Rockville, MD 20852

Newsletter Sponsorship

The user group has been supported by many companies over the last three years. If your company would like to sponsor a mailing of the newsletter, please call for more information. This issue of the newsletter is sponsored by MILVETS System Technology, Inc.

Plans for 1994

We plan to have quarterly meetings and publish four newsletters this next year. The first meeting in February is described earlier in this newsletter. We are still exploring options for our second meeting. One possibility is to have a joint meeting in May with the Windows NT user group and have a look at Informix SE on Windows NT. The third meeting will be in August at the Informix office to review new products from Informix and the results of the Informix world wide user group conference. The fourth meeting will be Forum 94 in November.

User Group Membership Dues in 94

Our user group has grown from 130 names to over 430 names on our mailing list this year. We have incorporated as a non-profit organization. Our objective is to share information, learn about new products, provide technical input, discuss problems, share solutions, and network with other Informix users. In order to cover our expenses and support this growth, the board of directors decided to implement membership dues in 1994. The membership dues will be $20.00. Everyone who registered and paid for the user group forum will automatically be covered as a member for one year. Membership will allow you to receive all our newsletters and a discount on our forum next year. We will discuss this at our February meeting and additional information will be sent out in the next newsletter. Please call me if you have any questions or suggestions or would like to volunteer to participate on the board of directors.


Indexing Strategies

by Kevin Fennimore

One of the most important and often overlooked areas of database performance and tuning is indexing strategies. In many cases, database performance can be improved simply by using indexes efficiently and effectively. An understanding of the index structure and a few basic indexing strategies is necessary in order to achieve optimal performance from the database engine.

Informix uses the B+ tree architecture to maintain indexes. A B+ tree is similar to a binary tree except that each node can contain more than one key value and more than two pointers. The tree is organized into levels, with Level 0 containing the leaf nodes, which point to the actual data rows, and a root node representing the beginning of the tree (see diagram below). The number of key value/pointer combinations (also known as index entries) contained in a node depends on the size of the key value.

B+ Tree Diagram

When an index is searched, the root node is read and the next node to read is determined based on the search value. This continues until a Level 0 node is read; this node contains the pointer to the actual data for the desired search value. This means that the more levels there are in the index tree, the more reads are required to search the index.

As the index tree grows, its nodes become full. When this occurs, the full node is split in two and the middle key value is inserted in the node at the next higher level, with a pointer to the newly created node. If the node at the next higher level is full, then that node is split as well, and so on. When the root node becomes full, it splits to form a new root, adding another level to the B+ tree.

An index may be either unique or duplicate, and may also be composite and/or clustered. A unique index is one in which no two rows share the same key value. A duplicate index allows different rows to have the same key value.

An index that is built on two or more columns is called a composite index. Composite indexes are most commonly used to create a unique index key for tables. Other uses include speeding up queries involving joins, filters and ORDER BY's on multiple columns. When Informix uses indexes it is able to use what is known as the leading portion of a composite index key. Consider the following example:

Table tab1 has a composite index on columns col1, col2 and col3. When a select is run that involves a join, filter or ORDER BY on these three columns, the engine is able to use this index. In addition, the engine may also use it when performing any of the aforementioned operations in a select on col1 alone, and likewise on col1 and col2, because these columns are in the beginning or leading portion of the composite index. Additional indexes on column col1, or on columns col1 and col2, are therefore unnecessary. Unfortunately, queries involving columns col2, col3, or col2 and col3 would not make use of the composite index because these columns are not in the leading portion of the index.
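As a minimal sketch of this rule (the index name and literal values are illustrative):

CREATE INDEX ix_tab1 ON tab1 (col1, col2, col3);

{ these selects can use the leading portion of ix_tab1 }
SELECT * FROM tab1 WHERE col1 = 10;
SELECT * FROM tab1 WHERE col1 = 10 AND col2 = 20;

{ these cannot, because col2 and col3 are not the leading portion }
SELECT * FROM tab1 WHERE col2 = 20;
SELECT * FROM tab1 WHERE col2 = 20 AND col3 = 30;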

Another attribute an index can have is that of clustered. A clustered index physically writes the data records in the order of the index. This type of index is useful primarily for lookup tables that are heavily queried and seldom updated. When a record is read from the database, a block of records is actually physically read into memory. If records are read in an order similar to the clustered index, one physical read places many records into memory, and subsequent ordered reads of the data are done from memory instead of requiring another physical read. The problem with a clustered index is that the physical order of the data is not maintained as rows are added or updated. This causes the physical ordering to drift from the index order, which can reduce the performance gains.

The physical order of the data can be regained by running an alter index statement as follows:

ALTER INDEX clustered-index-name TO CLUSTER

Informix will make a new copy of the table, writing the data in the order of the index, and then drop the original table; note that this requires enough disk space for two copies of the table. This operation should be done on a regular basis for frequently modified tables where clustered indexing is beneficial.
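For example (the index name is illustrative; the state lookup table is from the stores demonstration database):

CREATE CLUSTER INDEX ix_state ON state (code);

{ later, after inserts and updates have disturbed the physical order }
ALTER INDEX ix_state TO CLUSTER;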

There are two basic costs of using indexes on tables. The first is the time necessary to update the indexes when changes are made to the table (INSERTs, UPDATEs and DELETEs). An INSERT requires a new index entry to be added, a DELETE nulls out the pointer in the node, and an UPDATE of the key value deletes the existing value and then re-inserts the new value.

The other cost is disk space. Indexes require 4 bytes of storage for each data pointer, space for the key values and space for the nodes in the B+ tree. This space requirement often adds up very quickly.

The benefits of indexing include enforcement of unique keys (unique indexes), improved query and sorting performance, and key-only reads of data. Having a unique index on a table ensures that only one row in the table will have a particular value in a column or group of columns.
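For instance (the index name is illustrative; customer is from the stores demonstration database):

CREATE UNIQUE INDEX ix_custnum ON customer (customer_num);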

Query performance is increased by creating indexes on columns that are part of filter criteria and columns used in joins of tables. This is due to the use of indexed reads to find data rows as opposed to sequential reads of the data file. Sorting performance is also increased by creating indexes on columns used in the ORDER BY since the engine can do an indexed read to read the data in the order of the index instead of having to sort the data after it is read.

With OnLine, there is the capability of doing what are known as key-only reads. These occur when all of the selected columns are contained in an index that is being used, so the values can be read directly from the keys instead of accessing the data rows.

Several indexing guidelines may be applied to a database. When applying these guidelines and adding indexes, care should be taken to weigh the benefits against the costs. The first guideline is to index columns that are involved in joining tables together. When tables are joined and no indexes exist on the joined columns, the engine will either sequentially scan the tables (which is very slow), create a temporary index on the columns (versions 4.0 and 4.1), or use the new sort-merge join (version 5.0 and later), which is normally faster than sequential scans.

When using a column as a filter on a large table, it is a good idea to consider indexing that column. A filter is a criterion on a particular column (e.g. WHERE cust_num >= 100). If the column is indexed, the engine will use that index to read the desired rows of data without sequentially scanning the table. However, if the filter would return a majority of the rows in the table, it is just as efficient to read the table sequentially, since most of the rows will be read anyway and there is no overhead of reading the index file.

Indexes on columns that are highly duplicated should be avoided. These include columns such as sex (values of M or F) and columns that contain yes or no values. Duplicate values in an index are grouped together as a list. As the number of duplicates increases, so does the length of the list for that value. When a value is added it is inserted at the end of the list; when a delete is performed, the list must be searched from the beginning until the row to be deleted is found, and then the list must be rewritten. Columns that have only a few duplicated values do not cause a problem, since their associated lists are not very long, so searching and modifying them is not as costly. If a highly duplicated column must be indexed, it can be combined with another column in the table to form a composite index with far fewer duplicates, as sketched below.
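For instance (the member table and its columns are hypothetical):

{ instead of:  CREATE INDEX ix_sex ON member (sex); }
CREATE INDEX ix_sex ON member (sex, member_num);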

As mentioned previously, building indexes on columns used in the ORDER BY clause of a select can improve the performance of the query. This is also true for columns used in a GROUP BY clause, since the engine must sort the rows in order to perform the grouping. If the query is very complex, there is a chance that the engine may not use the index on the ORDER BY columns; the output of SET EXPLAIN is useful in determining whether the index is being used.

The size of the key values in an index is very important; the smaller the key, the better. When key sizes are small, more of them fit into a node of the B+ tree, which reduces the number of nodes in the tree. This in turn reduces the number of physical reads needed to find a value in the index. Smaller keys also require less disk space. An exception to this rule is when using the key-only reads available in OnLine: if all of the columns selected are contained in the key value, it is more efficient to read from the index and not access the table data, which can make an index with larger keys worthwhile.

In addition to keeping keys small, the types of the columns in the key are also important. Keys built on CHARACTER columns are less efficient than those built on numeric columns such as INTEGER and SMALLINT. CHARACTER columns are normally longer than numeric columns, and they require the engine to examine each character to determine equality, whereas numeric columns require only one comparison.

The last guideline is not to heavily index highly volatile tables (ones that undergo a large number of modifications). As mentioned earlier, when a row is modified the table's indexes must also be modified. If a table has 10 indexes, it will take longer to modify a row in that table than a row in a table with only 2 indexes.

To avoid this overhead when making massive modifications to a table, drop all of the non-unique indexes from the table, make the modifications, and then rebuild the indexes. This eliminates the overhead of maintaining the indexes during the changes, and it also produces fresh indexes that are not fragmented. The same approach should be used when loading data into the database to increase the performance of the load.
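A sketch of that sequence (index, table, and file names are illustrative; LOAD is the statement provided by dbaccess and isql):

DROP INDEX ix_ord_date;
LOAD FROM "neworders.unl" INSERT INTO orders;
CREATE INDEX ix_ord_date ON orders (order_date);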

As is always the case, there are special circumstances where these guidelines may not apply, but in general they should be useful in creating indexes that are effective and efficient. The output of the SET EXPLAIN command is a good means of evaluating the effectiveness and use of indexes for a given query. Unfortunately there is not room for a discussion of that here, so it can wait for another time (or article).
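Turning it on is a one-liner; a minimal sketch (on UNIX the plan is written to a file named sqexplain.out in the current directory):

SET EXPLAIN ON;
SELECT * FROM customer WHERE customer_num >= 100;
{ examine sqexplain.out to see which indexes the optimizer chose }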

Kevin Fennimore

Summit Data Group

100 RockHaven Rd., Apt H203, Carrboro, NC 27510

Phone: 919/933-8934


An Introduction to Using SQL in Shell Scripts

by Lester Knutsen

One of the advantages of UNIX is the power of shell scripts for developing systems. This article is an introduction to using shell scripts with embedded SQL to access your database. I am using the Informix database for these examples, with the Informix SQL command interpreter "dbaccess". However, these examples will also work with Informix "isql" and should work with any database that lets you redirect standard input and output. The basic items we will examine are: redirecting input and output, passing shell variables to SQL, and setting shell variables with the results of SQL commands.

1. Redirecting Input

One way to include SQL commands in a shell script is to redirect standard input to "dbaccess" from within the shell script. The following is an example shell script:

Figure 1.
#!/bin/sh
dbaccess - - <<SQLSTMT
database stores5;
select customer_num, fname, lname, company from customer;
SQLSTMT

When dbaccess starts, it expects a database name and an SQL script name as its arguments, or it will display its menus and prompt you for them. The two dashes "- -" indicate that the database and commands will come from standard input. The "<<" tells the shell that standard input is redirected and that everything up to the second "SQLSTMT" is to be passed to dbaccess as standard input. This is called a "here document" in shell scripts. The program dbaccess treats these lines as if you had typed them at the keyboard. It is like typing dbaccess - - <filename, where filename is a file containing the SQL commands. You do not have to use the word SQLSTMT, but you do need two identical words to mark the beginning and end of the redirected input. Running this script will start dbaccess and process the SQL commands: the first command opens the stores5 database and the second displays the number, name, and company of every customer.

2. Redirecting Output

If you have a large customer table, the output will scroll off the screen. Dbaccess sends its output to two standard devices that are normally defined as your terminal: data goes to standard output, and processing messages go to standard error. In the above example, the names and companies are sent to standard output and the two messages, "Database selected" and "99 row(s) retrieved", are sent to standard error. These can be redirected to files by changing the dbaccess line in the Figure 1 example to:

dbaccess - - >cust.rpt 2>error.log <<SQLSTMT

" sends">
The first ">" sends standard output (data) to the file "cust.rpt", and the "2>" sends standard error (messages) to the file "error.log".

What is more useful is to send data to a paging program like "more" and messages to a log file. Figure 2 is an example:

Figure 2.
#!/bin/sh
{ 
dbaccess - - 2>error.log <<SQLSTMT
database stores5;
select customer_num, fname, lname, company from customer;
SQLSTMT
} | more

" has been">
The first ">" has been removed and a pair of "{ }" added. The braces instruct the shell to execute the enclosed statements as a group, which makes it possible to pipe the output to another program such as more.

3. Using Shell Variables

Shell variables and prompts can be used with SQL. The following example prompts for a company name and passes the variable to SQL to be used in the select statement. Entering "A*" at the prompt selects all companies whose name begins with the letter "A".

Figure 3.
#!/bin/sh
echo "Enter company name (use * for wildcard matches) to select"
echo "Company : \c"
read comp
dbaccess - - 2>error.log <<SQLSTMT
database stores5;
select customer_num, fname, lname, company from customer
where company matches "$comp";
SQLSTMT

4. Getting Data Into Shell Variables

The results of an SQL command can be inserted into shell variables. The following is a simple example of a mail-merge program: it selects names and companies from a database, sets shell variables, and merges that data with some text. There are better ways to do this with the programming tools that come with a database, but it illustrates the power of embedding SQL in shell scripts.

Figure 4.
#!/bin/sh
today=`date +%m/%d/%y`		# get today's date
{
dbaccess - - 2>error.log <<SQLSTMT
	database stores5;
	output to pipe "pr -t" without headings
	select customer_num, fname,lname,company from customer;
SQLSTMT
} | while read line 	# pipe the output to while read
do
if [ "$line" ] # check if line is not NULL 
then 
	# First parse the line into words/variables using set
	set $line	# assign the line to positional variables
	name="$2 $3"	# get the second and third variable for name
	# company name may include spaces, $4 is only the first word
	# so we discard the first 3 positions and assign the
	# rest of the line to the comp variable
	shift 3		# discard the first three variables
	comp="$*"	# let all remaining variables = the company
	## Start of simple form letter
	echo "Date: $today"
	echo "To:   $name"
	echo "      $comp"
	echo "Thank you for your business"
fi
done

Figure 4. Example output
Date: 11/15/93
To:   Frank Lessor
      Phoenix University
Thank you for your business

In this example the output is sent to a while loop, which reads each line of output until it is done. Each line is broken apart into words by the set command: the first word is assigned to $1, the second to $2, and so on. This gets the name of the person. The company name is more difficult because it may contain spaces; the name "Big Company" would be broken into two variables. The command "shift 3" discards the first three variables, so what was $4 becomes $1. All remaining variables are then assigned to the company name with comp="$*".
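When only a single value is needed, command substitution avoids the while loop entirely. A minimal sketch, assuming the stores5 demonstration database:

#!/bin/sh
count=`dbaccess - - 2>/dev/null <<SQLSTMT
database stores5;
output to pipe "cat" without headings
select count(*) from customer;
SQLSTMT
`
count=`echo $count`	# strip surrounding whitespace
echo "The customer table has $count rows"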

This has just touched the surface of what can be done with embedded SQL in shell scripts. It does have its limits, but a lot can be accomplished. Complex applications like billing systems and scheduling systems can be developed using SQL in shell scripts.

Lester Knutsen Email: lester@access.digex.net

Advanced DataTools Corporation Phone: 703-256-0267


INFORMIX-OnLine Database Administration Tips

by Madhu Reddy

This article provides information and tips to help with database administration responsibilities on systems built with UNIX and INFORMIX-OnLine 5.0. Database administration has two major phases: the first is to set up the database(s), and the second is to maintain the database(s).

The first phase, setting up the database(s), includes three steps: planning, creating an instance, and creating the database(s). Planning is the most important component of this phase; it requires many decisions, the most important of which are:

1. Create a logical name for each physical device. These logical names give you the flexibility to substitute physical devices when restoring the system after disk corruption.

2. Decide the number of dbspaces to be created. It is better to have more, smaller dbspaces: this makes reorganizations easier (discussed in the maintenance phase), although altering and clustering tables may require extra disk space. If BLOBs will be used to store word processing documents or image data, also decide the number of blobspaces to be created.

3. One of the major problems with Informix-OnLine is deciding the logical log size, which cannot be altered without recreating the instance. This is hard to decide without information on the transactions to be processed: if the logical log is too small it fills quickly, and if it is too big it takes a long time to fill, yet a logical log is not backed up until it is full. For these reasons you need to choose a reasonable size. A logical log size of 6MB to 10MB, with the number of logical logs set at 30, has worked well for the applications I have worked on.

4. In general, Informix suggests a physical log about twice the size of a logical log. This size can be altered later.

5. Most of the parameters, other than the logical log size, are tunable. The important ones are the number of locks, the number of buffers, the physical log size, the checkpoint interval, the logical log buffer size, the physical log buffer size, the number of page cleaners, and the number of LRU queues (see the tbconfig sketch after this list). Since these parameters are tunable, you need not spend much time deciding on exact values.

6. The Informix manual does not make it clear that an archive (backup) can be written to raw disk. In fact, an archive can go to raw disk, a cooked file, or tape. If you have enough disk space, it is advantageous to archive to raw disk and copy the raw disk image to tape later: this provides two copies of the backup, and archiving to raw disk is faster than archiving to tape. However, the raw disk space required for the archive must be greater than or equal to the total allocated table space of all tables.

7. Logical log backups can be written to tape, raw disk, or a cooked file. It is not advisable to write logical log backups to raw disk, because there is no way of knowing when the backup device is out of space; the only way to find out is to check the log file for errors.

8. Estimate the table space size (extent and next size) for each table based on the number of current rows, the number of indexes, and future expansion.

9. For each table, decide whether to use row or page level locking. Page level locking uses fewer resources (shared memory), while row level locking provides more concurrency.

10. Rate each table's usage level (very low, low, medium, high, or very high) and distribute the heavily used tables among different dbspaces.
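Items 3 through 5 correspond to parameters in the tbconfig file ($INFORMIXDIR/etc/tbconfig). The excerpt below is only a rough sketch: the parameter names are those of the 5.0 configuration file, but the values are illustrative and the units (kilobytes, pages, seconds) should be checked against the manual.

# excerpt from $INFORMIXDIR/etc/tbconfig -- illustrative values only
LOGFILES    30       # number of logical log files
LOGSIZE     8192     # size of each logical log; fixed once the instance exists
PHYSFILE    16384    # physical log size; roughly twice the logical log
LOCKS       20000    # maximum number of concurrent locks
BUFFERS     1000     # shared-memory buffer pool
LOGBUFF     32       # logical log buffer size
PHYSBUFF    32       # physical log buffer size
CKPTINTVL   300      # checkpoint interval, in seconds
CLEANERS    4        # number of page-cleaner daemons
LRUS        8        # number of LRU queues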

A picture is worth a thousand words: after planning, draw up diagrams like the following, along with the initial configuration worksheet shown in the INFORMIX-OnLine Manual (page 1-19).

Figure 1 - DBSPACE to Raw partition mapping 

DBSPACE       Chunk   Logical        Offset   Length    Physical  Mirror    Part.
name          number  device name                       device    device    #
-----------   ------  -------------  -------  --------  --------  --------  -----
dbspace1      1       /dev/dbraw11   0        271548    phdsk20   phdsk30   i
dbspace1      2       /dev/dbraw12   0        271548    phdsk20   phdsk30   j
dbspace1      3       /dev/dbraw13   0        271548    phdsk21   phdsk31   k
dbspace2      1       /dev/dbraw21   0        271548    phdsk21   phdsk31   i
dbspace2      2       /dev/dbraw22   0        271548    phdsk21   phdsk31   j
dbspace3      1       /dev/dbraw41   0        6000      phdsk40   phdsk50   o
dbspace3      2       /dev/dbraw41   6000     1080000   phdsk40   phdsk50   o
dbspace4      1       /dev/dbraw14   0        271548    phdsk20   phdsk30   l
dbspace4      3       /dev/dbraw51   0        543096    phdsk40   phdsk50   n
rootdbspace   1       /dev/dbraw31   0        200000    phdsk22   phdsk32   m
logdbspace    1       /dev/dbraw31   200000   300000    phdsk22   phdsk32   m
not used      1       /dev/dbraw31   500000   43100     phdsk22   phdsk32   m
(1)           -       /dev/archive   -        1080 MB   phdsk20   -         f
(2)           -       /logbkup       -        530 MB    phdsk41   phdsk51   g
(3)           -       /maintenance   -        530 MB    phdsk50   -         g

Notes: (1) The archive space is raw and requires no dbspaces.
       (2) The logical log backup space (/logbkup) is a cooked Unix filesystem.
       (3) The /maintenance partition is a Unix filesystem.


Figure 2 - Table Extent Calculations and dbspace Assignments

Usage  Table    Dbspace      Extent Size  Next Size  Lock Mode
-----  -------  -----------  -----------  ---------  ---------
M      table1   dbspace1     7000         1000       row
VL     table2   dbspace1     9000         1000       page
L      table7   dbspace1     20           8          row
VH     table12  dbspace1     28000        3000       page
H      table13  dbspace1     60000        2500       row
L      table14  dbspace2     6000         500        row
M      table16  dbspace2     13000        1000       row
VL     table17  dbspace2     16           8          page
VH     table24  dbspace2     25000        2500       page
H      table27  dbspace3     200000       30000      page
M      table28  dbspace4     3000         500        row
VH     table29  dbspace4     13500        1500       row
L      table31  dbspace4     20000        2500       page
VH     table35  rootdbspace  100          100        row


Figure 3 - Raw Disk Partitions

Device(s)                                  Contents (offsets)
-----------------------------------------  --------------------------------------------
/dev/dbraw11, /dev/dbraw12, /dev/dbraw13   dbspace1 (0 to 814,644 across the three chunks)
/dev/dbraw21, /dev/dbraw22                 dbspace2 (0 to 543,096 across the two chunks)
/dev/dbraw41                               dbspace3: chunk 1 (0 to 6,000), chunk 2 (6,000 to 1,080,000)
/dev/dbraw14, /dev/dbraw51                 dbspace4 (0 to 816,644 across the two chunks)
/dev/dbraw31                               rootdbspace (0 to 200,000), logdbspace (200,000 to 500,000), not used (500,000 to 543,100)
/dev/archive                               no dbspaces assigned; reserved for the level 0 archive (0 to 1,080,000)
/logbkup                                   mounted Unix filesystem for logical log backups (0 to 530,000)

Note: Two chunks have been created in dbspace3 to avoid the 2% space reservation set by Informix.


The second step is to create an instance. Having collected all this information, create an INFORMIX-OnLine instance using the tbmonitor utility, following the instructions in the INFORMIX-OnLine Administration Manual. In this step you create all the dbspaces and log spaces, initialize shared memory, and move the logical and physical logs to other dbspaces (say, logdbspace). After initialization, bring the instance online.

The third step is to create the database(s). If your current environment is INFORMIX-SE, capture the current database schema in an SQL file by running the dbschema utility. Then edit the SQL file to add the dbspace, extent size, next size, and lock mode for each table. If your current environment is not Informix, prepare DDL statements with the above parameters to create the tables. Create the database and table spaces using dbaccess or ISQL, and populate the tables using the dbload utility or the Informix SQL LOAD statement.
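A minimal sketch of that path from SE to OnLine (database, file, and command-file names are illustrative; check the utility flags against the 5.0 manuals):

# capture the existing SE schema in an SQL file
dbschema -d stores_se stores.sql
# edit stores.sql to add IN dbspace, EXTENT SIZE, NEXT SIZE and
# LOCK MODE clauses to each CREATE TABLE statement, then create
# the new database and tables under OnLine
dbaccess - stores.sql
# load the data from unload files using a dbload command file
dbload -d stores5 -c load.cmd -l load.err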

The maintenance phase is the second part of database administration. It includes daily operations, reorganization and clustering of tables, migrations, purging, tuning parameters, security, restores, and troubleshooting.

The daily operations include starting and shutting down the database instance, watching the logical log backups and switching the backup to the next log file, and archiving the instance. Most of these operations can be automated with shell scripts.
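A rough sketch of that automation (flags and prompts per the OnLine 5.0 manual; a backup device is assumed to be configured already):

#!/bin/sh
# routine maintenance sketch for an OnLine 5.0 instance
tbtape -a     # back up any full logical log files
tbtape -s     # take an instance archive (prompts for the level)
# controlled stop and restart, e.g. around system maintenance:
tbmode -ky    # immediate shutdown of the instance
tbinit        # bring the instance back online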

Clustering and reorganization play an important role in performance. The tbcheck utility (with the -pe option) provides a disk allocation report for all table spaces. Table spaces may become fragmented over time; in that case you need to unload the data from the affected tables, recreate the tables, and reload the data. For this reason it is better to have more, smaller dbspaces. To improve performance, unload and reload the data in the order of the most frequently used key(s); this stores the data in clustered order within the table. If a single table is fragmented, or has been modified heavily, it is advisable to create a clustered index on it. Creating a clustered index requires an equal amount of empty space within the dbspace, because it reorganizes the data by copying it to another location.

Migrations occur when tables change (columns added, dropped, or resized) or when columns and tables are renamed. A table change copies the table to another location, so you need enough empty space within the dbspace. Renaming columns and tables changes the system tables only and has no effect on the physical table space. My advice is to run migrations with no logging; a large table with many rows takes much longer to migrate with logging on.

Purging unused data is an important part of database maintenance. Since the data is no longer required, it is advisable to run the purge process in no-log mode. Sometimes it is faster to unload and reload the required data than to delete the unwanted rows with an SQL DELETE. It is also important to run UPDATE STATISTICS after the purge. Stored procedures can be used to write purge routines.
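A small sketch of the unload-and-reload style of purge (table, column, and file names are illustrative, and the date format depends on your DBDATE setting):

{ keep only the rows still needed, instead of deleting the old ones }
UNLOAD TO "orders.keep"
    SELECT * FROM orders WHERE order_date >= "01/01/1993";
DELETE FROM orders;
LOAD FROM "orders.keep" INSERT INTO orders;
{ refresh the optimizer's information after the purge }
UPDATE STATISTICS FOR TABLE orders;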

In addition, the following are important Informix tuning parameters that are not clearly explained in the manual.

1. Get the output of tbstat -p. A read cache percentage (%cached beside dskreads) above 95% and a write cache percentage (%cached beside dskwrits) above 82% indicate a well-tuned system. To improve performance, increase the number of buffers (see the sketch after this list).

2. Get the output of tbstat -l. If the average number of pages per I/O is close to the physical log buffer size, the buffer is being used to its full or near-full extent, so increase the physical log buffer size. If the correspondence is less than 75%, decrease it.

3. Likewise, from the output of tbstat -l: if the average number of pages per I/O is close to the logical log buffer size, increase the logical log buffer size; if the correspondence is less than 75%, decrease it.

4. Get the output of tbstat -p. Increase the number of table spaces, locks, users, network deadlock timeouts, or buffers when you see non-zero values under ovtbls, ovlock, ovuser, dltouts, or ovbuff respectively.

5. Get the output of tbstat -F. Minimize the FG and LRU writes and maximize the Idle writes by decreasing the LRU_MAX_DIRTY and LRU_MIN_DIRTY parameters and by increasing LRUS. This is achieved by trial and error; there is no direct solution. In addition, the checkpoint interval and the number of page cleaners can be modified as needed.
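The commands referenced above can be gathered into a quick health check (a sketch only; the exact column layout of tbstat output varies by release):

#!/bin/sh
# daily OnLine health check -- read the output by eye
tbstat -p    # profile: %cached read/write ratios and the ov* overflow counters
tbstat -l    # physical and logical log buffer activity (pages per I/O)
tbstat -F    # page-flusher counts: foreground (FG), LRU and Idle writes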

Restoring an instance with Informix-OnLine is very simple. The important thing is to make sure that the configuration file and device configuration match the previous archive. The Informix manual explains clearly how to restore an instance.

Security can be maintained through automated procedures. I like the DB Privileges utility developed by Advanced DataTools Corporation; this OnLine utility provides many facilities for managing security, including group security.

In addition, the following are the main problems with Informix-OnLine.

1. Any user can bring an Informix-OnLine instance down by killing an Informix process with kill -9. Never kill a process with kill -9; always use tbmode -z. (Refer to the article "The Case of the Kamikaze Daemon and Other Exotica" by Ian Goddard in the last issue of this newsletter.)

2. Informix uses more stack space when a query involves many joins and obtains data from many tables; this looks like a bug to me. When Informix exceeds the stack limit, UNIX kills the Informix process and Informix-OnLine goes into shutdown mode. To avoid this problem, set the stack limit to unlimited in your program or ask your system administrator to increase the stack limit.

3. When you are altering tables located in different dbspaces, or creating indexes on different tables simultaneously, you may sometimes get a "system files locked" error. This also seems to me to be a bug. While creating indexes I used the statement "lock mode wait" and did not get this error, but I am not sure whether this is a real solution.

4. There is no way to set the isolation level while running ACE reports, although an undocumented feature is available from Informix.

5. Informix claims that "the isolation committed read guarantees that every row retrieved is committed in the table at the time that the row is retrieved. Even so, no locks are acquired. While one process uses a row, another process can acquire an exclusive lock on the same row and modify or delete data in the row." My feeling is that it does not work the way it is claimed: we used to get "resource locked" or "index locked" errors. Theoretically, version management is required to provide this facility, and I believe version management similar to ORACLE's is not implemented in Informix-OnLine.

I hope Informix adds the following features to Informix-OnLine:

1. The ability to remove a chunk from a dbspace.

2. The ability to force a table onto a particular chunk within a dbspace (this can currently be approximated by creating a temporary table).

3. A command line instruction to list the existing databases in an instance.

4. Backup and restore of a single database located in a particular dbspace.

My thanks to Mr. Darryl McCoy for reviewing this article.

Any questions or suggestions can be directed to Madhu Reddy, American Computer Technology, Inc., 10816 Estate CT, Fairfax, VA 22030, Phone (703) 385-3273. Reddy has provided database consulting services to AT&T, the Department of Labor, the Department of the Treasury, and the Department of Defense.


Washington Area Informix Users Group

The Washington Area Informix Users Group (WAIUG) is an organization for users of Informix Database Software and Tools. The group primarily serves the Washington DC, Virginia, and Maryland areas but also has members from all over the USA. The group's activities include regular meetings, a newsletter and a computer bulletin board. The group holds an annual one-day forum with exhibits and seminars for members. The group is incorporated as a not-for-profit organization and managed by a volunteer Board of Directors.

The user group in the Washington area was started in May 1991, when nineteen people from a variety of companies and federal agencies met to hear about release 5.0 and make plans for a user group. At that meeting we identified the goals of the user group: to share information, learn about new products, provide technical input, discuss problems, share solutions, and network with other Informix users. Since that time we have grown to over 400 names on our mailing list.


This Newsletter is published by the Washington Area Informix Users Group.

Lester Knutsen, President/Editor

Washington Area Informix Users Group

4216 Evergreen Lane, Suite 136, Annandale, VA 22003

Phone: 703-256-0267

lester@access.digex.net