• Welcome to SQLitening Support Forum.
Messages - JoeByrne

The SQLitening server service should use the local IP address of the machine it runs on.

You then need to configure the router to forward all TCP requests on the given port (51234 by default) to the local IP address of the server.

Port forwarding can be tricky, depending on the router you are using, but as long as the SQLitening service is listening to the port number that you are forwarding, you should be able to connect to it just fine.

There is the possibility that the ISP is blocking ports in this range too.  You might consider changing it to some other port number (in the SQLitening.cfg file), maybe something like 8081 or 8082.
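Once forwarding is set up, a quick way to check whether anything is actually listening on a given host and port is a plain TCP connect. A Python sketch (the host and port you pass in are placeholders; run it from outside the LAN against your public IP and forwarded port, e.g. 51234):

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Attempt a TCP connection; True if something accepted it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False from outside but True from inside the LAN, the forwarding rule (or an ISP block) is the likely culprit.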

I've done it both ways and generally speaking, if I know there will be only 2-4 people possibly accessing the database at any given time (the same 2-4 people) then I normally install the database on a 'Windows Share' folder, hopefully on a PC that will always be on (at least during 'business hours').

However, if there is a possibility more people will need the application, or more PCs will be accessing the database, or I want to be able to access the database across a WAN connection (such as the Internet), then the Client/Server model is the best approach.  The server component adds a (very) slight overhead, but you can be more assured of data integrity this way.

All in all though, I can't say that I've personally seen a problem with corrupted SQLite databases.  Admittedly, I haven't done more than 2-3 apps in shared mode.
Quote: "I am using slGetInsertID and have not had any problems with it. Could it be a local mode vs. remote mode issue?  I'm testing exclusively with remote mode."
Could be; I haven't really used the remote mode much, if at all actually.  I really think it's more to do with AMD vs. Intel, but that's only because the PCs that seem to have the problem are AMD chips.  May just be a coincidence.
Quote: "BTW, MySQL's future does not look bright now that Oracle has bought Sun Micro."
I've heard this argument before, and more against this line of thought.  Personally, I don't think the Oracle purchase means anything.  There are more MySQL users than Oracle users and they tend to want totally different functions.  I don't see this as an either/or situation.
I've been away from SQLitening for some time now, working on a few apps using MySQL instead (courage coming from using SQLitening first :D).  But... I'm working on a new program that requires the last-inserted row ID.  I was developing the code on a different PC than my home one, and everything worked fine.  Then when I ran the compiled version at home, this command would GPF just like before.

I downloaded the most current version of SQLite3 and SQLitening figuring that I must have a DLL conflict someplace, but nothing I did would solve the GPF on this PC.  I just can't figure out what is different, other than the fact that my home PC uses an AMD processor and the others are all Intel.

Be that as it may, I'm 99% sure it has something to do with the QUAD value used in SQLitening.  Since my SQLite apps are (probably) never going to need quad values (at least I can test for this), I decided to forgo the slGetInsertID wrapper and just call the function directly.  Maybe this will be helpful for someone else at some time.

' -- Replacement for SQLitening slGetInsertID function ---
rs&  = slGetUnusedSetNumber
SQL$ = "SELECT last_insert_rowid();"
e&   = slSEL(SQL$, rs&)
IF slGetRow(rs&) THEN
   LastID& = VAL(slF(1, rs&))
END IF
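The PowerBASIC snippet above just asks SQLite for its own last_insert_rowid() value. The same behavior can be checked directly with Python's sqlite3 module (a sketch of the underlying SQLite call, not SQLitening code):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Accounts (ID INTEGER PRIMARY KEY, UserName TEXT)")
con.execute("INSERT INTO Accounts (UserName) VALUES ('John Smith')")

# SELECT last_insert_rowid() returns the rowid of the most recent INSERT
# made on this connection -- the same query the wrapper replacement sends.
last_id = con.execute("SELECT last_insert_rowid()").fetchone()[0]
print(last_id)  # → 1
```

Note that last_insert_rowid() is per-connection, so another client's inserts won't disturb your value.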
Not as far as SQLite is concerned, no.  Personally, I like separate lines for each one, unless performance is really --really-- critical (shaving a second or 2 is important).  It's my preference mainly for maintainability and debugging.  But, if you are sending the commands to a remote server, it's a tad faster to send one line than multiple lines.  Only testing will determine exactly how much time is involved.

I've done it both ways.  I don't think there's a "better" way, it all depends on what you want to do.  In one case, I wanted to do the import on a PC that didn't have Tsunami installed.  It was 'better' then to dump the data to a csv file first.  In another case, I was doing a one-time conversion from Tsunami to SQLitening.  In that case, I just opened both databases in the same program, read from Tsunami and wrote to SQLitening in order.
I haven't updated it lately.  I was waiting for the new version first, but then ran out of time :)

I'm working on a couple of new projects so I don't have the time to do much more with it at the moment, but I'll get in there and update it again within the next few weeks.

If anyone is interested in adding to it, or fixing any problems, all you need do is register.  I don't have any real restrictions on editing the pages.  The support here is top-notch, but for those that like the Wiki format, it's not a bad resource either :)
Sorry Gary.  I am in the process of updating my NOC which includes a new Internet connection and IP address range.

I just pushed out the DNS updates, so as soon as your cache updates, you should be able to get to the site, at twice the speed now!  Depending on your setup, this might happen right away, but more than likely, it'll take 24-36 hours.

You could drop to a command prompt and do an IPCONFIG /FLUSHDNS, which should request the IP address again; however, if your ISP is caching the addresses, they might not update for a while yet.
Both would be unique.  The connection stream will be different for each client so there is no worry about one app taking the data from another. 

AFAIK, the unique RowID value is only supported when you use the AUTOINCREMENT value on the primary key field.  If you are using your own primary key value (i.e. username) then you can't, I don't believe, get a "RowID" value from SQLite.

I generally use AUTOINCREMENT for every table and then create my own "primary key" as you have above.  Basically I do my searches on my own key value and then use the RowID where necessary (after having previously identified the proper records via my own key).

On second thought....  I did use this once and it worked, even without a specific 'rowid' column defined:

SQL$ = "SELECT _ROWID_,* FROM {table} WHERE {clause}"

For example:

slSEL "SELECT _ROWID_, Password FROM Accounts WHERE UserName = 'John Smith'", 1

IF slGetRow(1) THEN
   RowID& = VAL(slF(1, 1))
   PW$    = slFN("Password", 1)
END IF
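Selecting _ROWID_ works even when the table never declared its own rowid column, because every ordinary SQLite table carries a hidden rowid. A quick check of that behavior using Python's sqlite3 (a sketch mirroring the example's table and column names, not SQLitening code):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# No INTEGER PRIMARY KEY column declared -- SQLite still keeps a rowid.
con.execute("CREATE TABLE Accounts (UserName TEXT, Password TEXT)")
con.execute("INSERT INTO Accounts VALUES ('John Smith', 'secret')")

row = con.execute(
    "SELECT _ROWID_, Password FROM Accounts WHERE UserName = 'John Smith'"
).fetchone()
print(row)  # → (1, 'secret')
```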

Not sure if there is another way, but how about using PB's $NUL values?

slSel "SELECT * FROM Accounts WHERE Username = '" + sUser + "' AND ConHndl = '" + $NUL + "'"

Don't forget your record set too.
Quote from: Bern Ertl on January 29, 2009, 05:43:43 PM
If I read a table into memory and then delete a record by it's RowID index, what happens to the RowIDs for remaining records in the table?  Do they remain the same, or get re-indexed?  Assuming they stay the same, will new records to that table re-use indexes from deleted records, or are the RowIDs always unique even if the record is deleted?

With AUTOINCREMENT, RowIDs are never re-used: if you create a table with 100 records, delete all 100, then add one, it will be assigned RowID 101.  One caveat: without AUTOINCREMENT, SQLite only guarantees the rowid is unique among the rows currently in the table (it picks max rowid + 1), so after emptying the table a new record would start back at RowID 1.
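Whether a rowid gets re-used actually depends on AUTOINCREMENT, and the two behaviors are easy to compare with Python's sqlite3 (a sketch of plain SQLite, outside SQLitening):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Plain rowid table: the next rowid is max(existing rowid) + 1, so values
# from deleted rows can come back once the table is emptied.
con.execute("CREATE TABLE plain (id INTEGER PRIMARY KEY, v TEXT)")
con.executemany("INSERT INTO plain (v) VALUES (?)", [("a",), ("b",), ("c",)])
con.execute("DELETE FROM plain")
cur = con.execute("INSERT INTO plain (v) VALUES ('d')")
plain_rowid = cur.lastrowid
print(plain_rowid)  # → 1 (re-used)

# AUTOINCREMENT: SQLite remembers the highest rowid ever issued
# (in the sqlite_sequence table) and never hands it out again.
con.execute("CREATE TABLE auto (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
con.executemany("INSERT INTO auto (v) VALUES (?)", [("a",), ("b",), ("c",)])
con.execute("DELETE FROM auto")
cur = con.execute("INSERT INTO auto (v) VALUES ('d')")
auto_rowid = cur.lastrowid
print(auto_rowid)  # → 4 (never re-used)
```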
To the best of my knowledge, there is no 'simple' way to count the number of rows returned.  You'd have to step through the set and count them manually.
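One alternative, if all you need is the number of matching rows rather than the rows themselves, is to send SQLite a separate COUNT(*) query with the same WHERE clause. A Python sqlite3 sketch of the idea (table and data are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Accounts (UserName TEXT)")
con.executemany("INSERT INTO Accounts VALUES (?)",
                [("John",), ("Mary",), ("Sue",)])

# Same FROM/WHERE as the real query, but COUNT(*) instead of the columns.
n = con.execute("SELECT COUNT(*) FROM Accounts").fetchone()[0]
print(n)  # → 3
```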

There isn't anything inherent in SQLite for this, but a number of options can be used. 

Windows has a messenger service (NOT the 'chat' application), but many people disable this as it was a potential cause of spyware early on.

You can write your own, which I have done from time to time, and it seems to be the most reliable approach.  The client(s) should run a timer to check for any new "mail" at a fixed interval, say every 60 seconds.  There's probably a "WaitForSingleObject" way to do it too, but I've not gone down that route.
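A minimal sketch of that roll-your-own polling approach, using Python with a plain SQLite table standing in for the shared database (the `mail` table and the `last_seen` bookkeeping are my own invention, not part of SQLitening):

```python
import sqlite3

def fetch_new_mail(con, last_seen):
    """One poll: return messages newer than the last rowid we saw.
    A client would call this from a timer, e.g. every 60 seconds."""
    rows = con.execute(
        "SELECT rowid, body FROM mail WHERE rowid > ? ORDER BY rowid",
        (last_seen,)).fetchall()
    new_last = rows[-1][0] if rows else last_seen
    return rows, new_last

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mail (body TEXT)")
con.execute("INSERT INTO mail VALUES ('hello')")
con.execute("INSERT INTO mail VALUES ('world')")

msgs, last_seen = fetch_new_mail(con, 0)   # first poll sees both rows
msgs2, _ = fetch_new_mail(con, last_seen)  # nothing new on the next poll
```

Because rowids on a growing table are monotonically increasing, remembering the highest one seen is enough to pick up only the new messages each tick.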