Welcome to the SQLitening support forums!


Messages - Mark Strickland

There was a previous post from late 2008 describing a problem with a JOIN where the column names match.


I have tried all of the suggestions and partially solved the problem by changing the name of one column.  I am now trying to JOIN a single table multiple times to the main row, and I have the same problem.  Because the same table is JOINed multiple times, I cannot work around it by renaming columns.

Here is the SQL statement:

    SQL = "SELECT RxRecs.Rowid, * FROM RxRecs " + _
           " INNER JOIN RfRecs2   ON RfRecs2.RfRxNum           = RxRecs.RxNum " + _
           " INNER JOIN Patients  ON Patients.PtNum            = RxRecs.RxPtNum " + _
           " INNER JOIN Doctors   ON Doctors.DrNum             = RxRecs.RxDrNum " + _
           " INNER JOIN Drugs     ON Drugs.DgNum               = RxRecs.RxDgNum " + _
           " INNER JOIN SigRecs   ON SigRecs.SgNum             = RxRecs.RxSgNum " + _
           " INNER JOIN PriceTbls ON PriceTbls.PrcNum          = RxRecs.RxPrcNum " + _
           " LEFT  JOIN Insurances AS I1 ON I1.InsNum          = Patients.PtIns1PlanN " + _
           " LEFT  JOIN Insurances AS I2 ON I2.InsNum          = Patients.PtIns1PlanN " + _
           " LEFT  JOIN Insurances AS RfI1 ON RfI1.InsNum      = RfRecs2.RfClm1InsN " + _
           " LEFT  JOIN Insurances AS RfI2 ON RfI2.InsNum      = RfRecs2.RfClm2InsN  WHERE " + _
           " RxRecs.RxNum = 239363"

This works fine in SQLmaestro.

The need for this more complex JOIN query came from a speed issue: reading these tables with multiple single-table SELECTs is too slow when the DB is accessed over a LAN.

Is there any way to work around this problem?

What type of BEGIN does the B1 flag do?

Based on what I see on SQLite.org (link above) he would need a BEGIN IMMEDIATE.  This would allow other reads but block all other BEGIN IMMEDIATEs until the COMMIT.
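That behavior can be sketched with two connections in Python's sqlite3 (a file-based DB is needed, since in-memory databases are private to one connection; the file name is arbitrary):

```python
import sqlite3, os, tempfile

# two connections to the same on-disk DB
path = os.path.join(tempfile.mkdtemp(), "demo.db")
c1 = sqlite3.connect(path, isolation_level=None)              # autocommit mode
c2 = sqlite3.connect(path, isolation_level=None, timeout=0.1)
c1.execute("CREATE TABLE t (x)")

c1.execute("BEGIN IMMEDIATE")        # claim the write lock up front
reader_ok = c2.execute("SELECT COUNT(*) FROM t").fetchone()   # reads still work
writer_blocked = False
try:
    c2.execute("BEGIN IMMEDIATE")    # a second immediate writer...
except sqlite3.OperationalError:     # ..."database is locked"
    writer_blocked = True
c1.execute("COMMIT")
c2.execute("BEGIN IMMEDIATE")        # after the COMMIT it goes through
c2.execute("COMMIT")
print(reader_ok, writer_blocked)     # (0,) True
```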

Good to know that the TRANSACTIONS in SQLite do guarantee exclusive access.  That solves half of the problem.

The other half is when interactive users are involved: you read a record, let it sit on the screen, then do an update.  The update can happen in a few seconds or after hours of waiting (after the person returns from lunch or a long phone call!).

The only satisfactory way I have found that does not lock out other users is to re-read the record just prior to the update; if it has changed, reject the transaction, refresh the screen with the new data, and make the user enter their changes again.  Some of the multi-million dollar "big boy" systems use this as a standard technique.  Unless you update the same row repeatedly it does not cause too many problems.

You can get into other ideas, like timing out a screen that has read and locked a record for update, making the user reload the record if they wait too long to save it.

If you use a lock method that leaves the record locked upon reading it, you end up locking out other users for potentially long periods of time.  In the case of going to lunch (see above) it becomes impractical.  If you don't know who locked the record it is even more complex, because you cannot track down the user who has the record on their screen and get them to release it.  A proper lock method should show who owns the lock so you could track them down.

My guess is many interactive systems don't handle this properly and sometimes do what are called "Interleaved Updates" without knowing it, since it is not reported as an error.

Ah the joys of interactive computing.
Locking can fail if not designed correctly.  Since SQLite does not have a record-locking mechanism, you will have to define your own.  I suggest some Googling.

Think about these issues:
1) Deadlocks - Process A requests a lock on record 1 and then on record 2 to do an update, while Process B requests a lock on record 2 first and then on record 1.  If B's request for record 2 lands between A's two requests, the two are deadlocked (sometimes called a Fatal Embrace).

2) Releasing orphaned locks promptly.  Paul mentioned this one.  If you don't handle this the server will quickly deteriorate.

3) If the locking scheme can be interrupted at the thread level it can still fail.  Process A starts a lock request and determines the resource is available, but before it places the lock flag the OS reschedules its thread and it is temporarily suspended.  Process B requests a lock and completes.  Process A's thread is then rescheduled, picks up where it left off, and assumes it can complete the lock.  Both A and B now hold locks on the same record.
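Point 1 is usually prevented by imposing a global lock order: every process acquires its locks in the same sequence, so the circular wait can never form.  A small illustration with Python threads (the two locks stand in for records 1 and 2):

```python
import threading

lock1, lock2 = threading.Lock(), threading.Lock()
results = []

def worker(name):
    # Both workers acquire in the SAME global order (lock1 then lock2),
    # so the A-waits-for-B / B-waits-for-A cycle can never form.
    with lock1:
        with lock2:
            results.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))   # ['A', 'B'] - both finish, no deadlock
```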

I have not recently done any work with SQLitening, but my understanding is that it queues requests and handles them one at a time.  This is how it makes SQLite multi-user.  Since SQLitening allows (in some places) multiple SQL statements to be stacked in a single request, you could probably construct SQL statements that check for a current lock and either report that the requested record is locked by another process or actually place the desired lock and return success.  This would all be done in an application-level SQL table of locks, much as Paul described, except that his solution would be in memory.  This approach would use the "one request at a time" inherent feature of SQLitening to avoid #3 above.
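One way to sketch such a lock table in SQL: let a PRIMARY KEY enforce atomicity, and store the owner (so you can see who holds a lock) plus a timestamp (so orphaned locks can be expired).  This uses Python's sqlite3 on a single local connection purely to show the SQL idea; under SQLitening the same statements would run server-side.  The table and function names are my own invention:

```python
import sqlite3, time

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE locks (
                   tbl TEXT, rec INTEGER, owner TEXT, taken REAL,
                   PRIMARY KEY (tbl, rec))""")

def try_lock(con, tbl, rec, owner):
    """Atomically claim (tbl, rec); the PRIMARY KEY rejects a second claim."""
    try:
        con.execute("INSERT INTO locks VALUES (?,?,?,?)",
                    (tbl, rec, owner, time.time()))
        return True
    except sqlite3.IntegrityError:       # someone else already holds it
        return False

def unlock(con, tbl, rec, owner):
    con.execute("DELETE FROM locks WHERE tbl=? AND rec=? AND owner=?",
                (tbl, rec, owner))

got_a  = try_lock(con, "pat", 42, "user_a")   # True
got_b  = try_lock(con, "pat", 42, "user_b")   # False: user_a holds it
unlock(con, "pat", 42, "user_a")
got_b2 = try_lock(con, "pat", 42, "user_b")   # True after release
```

The `taken` column is what a cleanup job would use to expire orphaned locks (point 2 above), and `owner` is how you would show the user who to track down.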

Testing locking can be "interesting" since failure may only occur in a very narrow slice of time.

One other trick, if this is used in all interactive processes, is this technique:
1) Read the record at the beginning of the "transaction" and save it in a temp buffer
2) Interact with the user
3) At the point of save read the record again into a second temp buffer
4) If the two temp buffers are not equal, the record has changed, so you reject the user's input and make them enter it again.

This still has issues, but if you would otherwise need to hold a lock for a very long time (user typing, answering the phone, etc.) and you do not frequently update the same records, it has some merit.  You must still lock for a moment at the end, once you determine the update is allowed because nothing changed.
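A compressed sketch of those four steps in Python's sqlite3 (hypothetical pat table; the competing update is simulated inline):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pat (pat_number TEXT PRIMARY KEY, pat_lname TEXT)")
con.execute("INSERT INTO pat VALUES ('00010965', 'SCHEFFLER')")

def read_buffer(number):
    return con.execute("SELECT * FROM pat WHERE pat_number=?",
                       (number,)).fetchone()

# 1) read the record at the beginning of the "transaction"
before = read_buffer('00010965')
# 2) ... user edits on screen, possibly for a long time ...
#    meanwhile another user sneaks in an update:
con.execute("UPDATE pat SET pat_lname='SCHEFFLER-SMITH' "
            "WHERE pat_number='00010965'")
# 3) re-read the record just before saving
now = read_buffer('00010965')
# 4) the buffers differ, so reject the save and refresh the screen
save_allowed = (before == now)
print(save_allowed)   # False
```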

If you do this with "system" processes you will have to build some additional logic in the code to handle things when the record changes.

Food for thought.

As I have been working with SQLitening, my SQLite query tool has proved to be very valuable.

There are TWO SEPARATE problems for folks trying to use SQLitening and learn SQL at the same time.

If you get a query tool that can connect directly to a SQLite DB, you can figure out the SQL code, THEN put it in your PB program using SQLitening with almost a straight cut and paste.  At that point you will know your SQL code is working.

You can also do the opposite: have your PB program output the generated SQL statements, displaying them so you can cut and paste them back into your SQL query tool, or even have the PB program stuff them directly into the clipboard for a direct paste.

I use SQLMaestro but below is a link to a large number of options and many are free.


These will let you play with SQL statements and even use the EXPLAIN QUERY PLAN (mentioned in a previous post) to help you figure out how SQLite will approach finding and returning your data.
I have had a problem trying to reproduce some code in SQLitening to match the functionality of Cheetah and how indexes work.  Since SQLite does not have CURSORS I have had to be more creative.

In standard SQL you cannot force the use of a specific index.  SQL engines are generally smart enough to determine when to use an available index.  SQLite will do that, but it has more limitations than some other SQL engines.

Consider an index that looks like this that indexes the LAST and FIRST names of a patient:


If you do this SELECT SQLite will NOT use the index.


This is because the index is a compound key (multiple columns): SQLite can only use it for equality tests on the leading columns, plus at most one inequality on the next column.  An inequality on a column that is not the leading column of the index (with no equality tests on the columns before it) means the index cannot be used.

SQLite will do a serial read of the table from start to finish to find all of the matches.  Without an ORDER BY clause the rows are returned in the order they are found in the table.

If the index had a single column, it would work similarly to Cheetah: start at the first matching key and read forward through the index from that position.

Here is some SQLite info that helps explain things:

To see what YOUR query is going to do put this in front of your SELECT statement:

explain query plan

You can also just use EXPLAIN but you will get some SQLite internal information that is less useful.
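A quick sketch of EXPLAIN QUERY PLAN against a compound index, using Python's sqlite3 (invented table and index names).  A range test on the leading column can use the index; a range test on only the trailing column forces a table scan:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pat (pat_lname TEXT, pat_fname TEXT, pat_number TEXT)")
con.execute("CREATE INDEX pat_name_ix ON pat (pat_lname, pat_fname)")

def plan(sql):
    # the human-readable plan is the last field of each EXPLAIN QUERY PLAN row
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

p1 = plan("SELECT * FROM pat WHERE pat_lname >= 'SCHEFFLER'")  # leading column
p2 = plan("SELECT * FROM pat WHERE pat_fname >= 'MARK'")       # trailing only
print(p1)   # e.g. SEARCH pat USING INDEX pat_name_ix (pat_lname>?)
print(p2)   # e.g. SCAN pat
```

The exact wording of the plan text varies between SQLite versions, but the SEARCH-with-index versus SCAN distinction is what to look for.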
Since we are starting to play with SQL, I think a new forum topic may be in order.  There have been several posts from people wanting to know how to get started with SQL.  Because of the non-procedural nature of SQL, it involves significantly different thinking when it comes to file access.

Here is an example of something I did in Cheetah and had to re-think in SQL.  It would be nice to have a place to post these kinds of "SQL Tips and Tricks" as they relate specifically to SQLite and SQLitening.

My Cheetah algorithm to search for a person in a list of names:
1) Find a record by a partial name entered by the user (PATIENT NAME)
2) Read the next 4 records in key sequence by PATIENT NAME
3) Read the previous 4 records in key sequence by PATIENT NAME
4) Display a list of 7 records with the one you searched for in the CENTER of the list

Because SQL is non-procedural, this is how I am getting the list.  I still have to process the list procedurally because, if you are at the front or back of the file, there may not be 4 records before or after the desired record, and you have to figure out what goes in the center of the result set.  It may be possible to do this in PURE SQL but I have not yet figured it out.

select * from

  (select 'A' as src,* from pat where upper(pat_lname) <= 'SCHEFFLER' and pat_number <> '00010965'
     order by pat_lname desc limit 4)

union all

select * from
  (select 'B' as src,* from pat where pat_number = '00010965')

union all

select * from
  (select 'C' as src,* from pat where upper(pat_lname) >= 'SCHEFFLER' and pat_number <> '00010965'
     order by pat_lname asc limit 4)

  order by pat_lname

The 'A', 'B', 'C' values in the src column show which subquery each record in the result set came from.
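Here is a runnable version of that centered-list idea in Python's sqlite3, with the UNION ALL set operators between the three subselects spelled out and a small invented data set ('SCHEFFLER' is row '00000005' here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE pat (pat_number TEXT PRIMARY KEY, pat_lname TEXT)")
names = ["ADAMS", "BAKER", "JONES", "MILLER", "SCHEFFLER",
         "SMITH", "TAYLOR", "WILSON", "YOUNG"]
con.executemany("INSERT INTO pat VALUES (?,?)",
                [(f"{i:08d}", n) for i, n in enumerate(names, start=1)])

rows = con.execute("""
    SELECT * FROM
      (SELECT 'A' AS src, * FROM pat
        WHERE upper(pat_lname) <= 'SCHEFFLER' AND pat_number <> '00000005'
        ORDER BY pat_lname DESC LIMIT 4)
    UNION ALL
    SELECT * FROM
      (SELECT 'B' AS src, * FROM pat WHERE pat_number = '00000005')
    UNION ALL
    SELECT * FROM
      (SELECT 'C' AS src, * FROM pat
        WHERE upper(pat_lname) >= 'SCHEFFLER' AND pat_number <> '00000005'
        ORDER BY pat_lname ASC LIMIT 4)
    ORDER BY pat_lname
""").fetchall()
print([r[2] for r in rows])   # 9 names with SCHEFFLER in the center
```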

I also discovered some other interesting "features" of SQLite regarding when indexes are or are not used, based on the index structure and the nature of the query.

If we had a place I would post the parts of a working example and the DB.  Others could post things they have found, and the original topic could be kept pure for the development activity of SQLitening.

Food for thought.


My invalid column name was a typo in a program I was converting from Cheetah, but it was a bit of a surprise that the program just ended when I closed the pop-up with the error message.

I suppose you are correct that it should never happen in a fully debugged and working program but I just like the ability to "stay in control".  When you have a program for sale you need to add some extra "rubber bumpers" to log errors, be more informative, etc.

The extra functions are probably a good way to handle the issue.  I could see a need in a DB conversion program where it would be required to determine if the DB had yet been modified.  The extra functions will make that easy.

I have this happen from time to time now when I need to add a feature to a program that is installed on many machines.  In Cheetah I look at the current structure of the DB.  If it needs modifying then I rename the old Cheetah file, create a new empty file with the new structure, then copy the data.

Thanks again for the hard work.


You may want to consider how you are handling errors in some of the functions, to allow the return of an error code rather than just ending the program.  Some functions have the ModChar feature and do handle errors gracefully.

slFN will abort if an invalid field name is passed.  I would prefer some sort of option to simply report an error and let me handle that gracefully.

Maybe a config option could change the abort behavior and report an error in the slGetError function rather than abort.  Alternatively you could add a parm to the function call like the ModChar in other functions.  Because these functions in question try to return a value (not an error) you could return the error in an alternate parm.

MyField = slFN("col_name",error_long)
if error_long then .....



Is FF3 going to have some direct support for SQLitening?



Yes some SQL DB's do have different "personalities".  Oracle and other "brand name" engines tend to have their own extensions.  In general (beyond the administration of the engine itself) SQL can be fairly standard if you stay away from the extensions added by various "brands".  SQLite is fairly "standard" from what I have seen.

One of the other differences with SQL and indexed data file managers like Cheetah and Tsunami is how you have to think about getting to the data.

JOINs are very powerful and let you create a "connected" set of info that involve multiple tables (files).  There are ways to summarize the data from a SQL statement using GROUP BY and your application code does not have to do any summing or other logic.  Sub Selects let you do some other interesting things (a SELECT within a SELECT).

I have even JOINed a table to itself in one application.  I got that tip from a SQL guru, and it was very counter-intuitive as I was trying to make the transition from things like Cheetah to SQL.
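For example (invented emp table; alias 'e' plays the employee, 'b' the boss, both the same physical table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, boss_id INTEGER);
    INSERT INTO emp VALUES (1, 'Pat',  NULL),
                           (2, 'Sam',  1),
                           (3, 'Alex', 1);
""")
# Join emp to itself: each employee row is matched to its boss row
rows = con.execute("""
    SELECT e.name, b.name AS boss
    FROM emp AS e
    JOIN emp AS b ON b.id = e.boss_id
    ORDER BY e.name
""").fetchall()
print(rows)   # [('Alex', 'Pat'), ('Sam', 'Pat')]
```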

One key thing to remember is that "standard" SQL statements are NON-procedural, so you cannot think of data retrieval the way you did in Cheetah (i.e. read a header record for an order, read the customer record for that header, run a sub-loop to read the detail for the order lines, read the part master for each part number of each line, etc.).  In SQL you can do the whole thing in one single SQL statement and then simply step through a linear list of records one at a time, with everything you need already assembled into one logical record.
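A sketch of that one-statement version in Python's sqlite3, with an invented orders/customers/lines/parts schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (cust_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders    (ord_id  INTEGER PRIMARY KEY, cust_id INTEGER);
    CREATE TABLE lines     (ord_id INTEGER, part_id INTEGER, qty INTEGER);
    CREATE TABLE parts     (part_id INTEGER PRIMARY KEY, descr TEXT, price REAL);
    INSERT INTO customers VALUES (1, 'Acme');
    INSERT INTO orders    VALUES (100, 1);
    INSERT INTO lines     VALUES (100, 7, 2), (100, 8, 1);
    INSERT INTO parts     VALUES (7, 'Widget', 3.50), (8, 'Gadget', 10.00);
""")
# One statement replaces the read-header / read-customer / loop-the-lines /
# read-each-part sequence; each returned row arrives fully assembled.
rows = con.execute("""
    SELECT o.ord_id, c.name, p.descr, l.qty, l.qty * p.price AS ext
    FROM orders o
    JOIN customers c ON c.cust_id = o.cust_id
    JOIN lines     l ON l.ord_id  = o.ord_id
    JOIN parts     p ON p.part_id = l.part_id
    ORDER BY p.descr
""").fetchall()
print(rows)
# [(100, 'Acme', 'Gadget', 1, 10.0), (100, 'Acme', 'Widget', 2, 7.0)]
```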

It is a bit of a mental "change of gears" but once you see how it can work you won't ever want to go back.

Find a good SQL tutorial (print or Web) and look at lots of examples.  If you can get the "hang" of JOINs then you will have "won about 75% of the battle".

Get a good SQL query tool and play with your DB constructing the SQL statements on the fly and see the instant results.  I wrote a Cheetah2SQL tool (posted here) that will help you convert some of your old Cheetah files to SQLite.  If you understand the data in the Cheetah form then it will help you make the transition to SQL.  Once you have an SQL statement working the way you want then put it in some PB code via SQLitening.

I use SQL Maestro (not free except the ODBC version).  I think Paul has found a free one for SQLite (SQLite3explorer - http://www.singular.gr/sqlite/).  Here is another one (http://www.sqliteexpert.com/) but it is not free.

ALSO -- Most SQL DBs have the EXPLAIN verb.  It will return how the server approaches finding and returning data.  It is very helpful if you have speed issues.  Because JOINs all happen "under the hood" you may find some queries running slow.  EXPLAIN will show you how the engine optimizes the query and uses indexes.

Good luck.

Mark Strickland
I have been trying to use an explicit record set number and have found a few "funny" things.

1) If you do an   slCloseSet(setnumber)   when no set is open you get a GPF.

2) There is no way (I can see) to determine if a set is currently open.

3) It appears that doing an slSEL with a set number that is currently open will not return any records (gets a -14 error).

My intent is to use multiple record sets for queries that need to be "left open" based on the program logic.
i.e. -- SELECT A, B, C from Table1 where ...  and begin processing rows from that set.  In some cases while processing that set (Table1) I will need rows from a different table.

Any assistance would be appreciated.

Thanks, Mark
I can go either way but I had already figured out my "work around".


Here is the Cheetah2SQLitening data migration tool that has been modified to use Fred Meier's  SQLitening code.  You will need to download Fred's SQLitening code to get the SQLiteningServerAdmin program to install the server and get some basic documentation.

Previously this utility could create physical DB files on the server, but that feature is now limited to a direct connection to a SQLite DB.  It can start the server if it is already installed but not running.

This should still be considered BETA code since I have done only limited testing.

After reading your responses I agree with you that the server version should NOT be allowed to create a physical DB file.  That makes sense.  Typically the creation of a physical DB is NOT something done on the fly by an application program.

My initial "need" to create the physical DB came from converting the Cheetah 2 SQLite utility that I previously posted to the FF Source Code Forum.  It had the option to create the file if desired.  I kept the option in the program, but it now works only for a direct-connect DB.

If needed the programmer could create a DB using the direct connect mode then copy it to the server data path (from the QUERY_SERVICE_CONFIG of the running server) if the server file permissions allowed that access.

As far as I am concerned, at this point, I think you are right on.  Unless the consensus is that the create option is needed, don't change it.

I believe SQLitening is one of the most elegant DB solutions on the planet.  Couple that with FireFly and PB ... nothing is better.   ;D