You'll find two screenshots from an application that uses SQLitening successfully.
On a tab called "Summary" we display information about:
- the physical server
- the database engine
- the database file
- the database management system (SQLitening).
Every day we do:
- a copy of the database at 7:15 pm using this command:
lResult = slCopyDatabase(gDataBase+".sav","E")
- a vacuum at 9:00 pm.
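For readers who want to try the same copy-then-vacuum routine outside SQLitening, here is a minimal Python sketch using the standard sqlite3 module's online backup API. This is not SQLitening's own implementation (which runs server-side via slCopyDatabase), and the file names are just placeholders:

```python
import sqlite3

def nightly_maintenance(db_path, backup_path):
    """Copy the live database to a backup file, then VACUUM the original."""
    src = sqlite3.connect(db_path)
    dst = sqlite3.connect(backup_path)
    try:
        src.backup(dst)        # online backup: takes a consistent snapshot
        src.execute("VACUUM")  # rebuild the file, reclaiming free pages
    finally:
        dst.close()
        src.close()

nightly_maintenance("app.db", "app.db.sav")
```

The backup API is safe to run while other connections are using the database, which is why it is preferable to a plain file copy for a live server.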
On another tab called "Connections & Named locks" we display:
- in the upper part, all the connections to the server (it would be useful to be able to filter only the connections to the database(s) used by an application, but I don't know how to do that).
- in the lower part, all the named locks on the server.
PS: by convention for my applications, all the named locks are composed of three parts:
1. A prefix with 3 characters used to identify the application.
2. The name of the table on which the named lock is set.
3. The record ID
With the first part it is easy to filter and display only the named locks associated with the current application.
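As a hypothetical illustration of that convention (the "INV" prefix, the table names, and the bare concatenation are all made up here; the post does not say whether a separator is used), building and filtering lock names might look like:

```python
# Three-part lock name: <3-char app prefix> + <table name> + <record ID>
APP_PREFIX = "INV"  # made-up application identifier

def make_lock_name(table, record_id):
    """Compose a named lock following the three-part convention."""
    return f"{APP_PREFIX}{table}{record_id}"

def filter_app_locks(lock_names, prefix=APP_PREFIX):
    """Keep only the named locks belonging to the current application."""
    return [name for name in lock_names if name.startswith(prefix)]

locks = ["INVCustomers42", "PAYInvoices7", "INVOrders15"]
print(filter_app_locks(locks))  # only the two INV* locks
</```

The same prefix test is what makes the "Connections & Named locks" tab able to show one application's locks at a glance.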
SQLitening combined with SQLite is very robust; we have never had a problem on the server or the client side. The support on this forum is also excellent. We upgraded our version of SQLitening a week ago and so far, no problems at all.
My objective was just to share our great experience with SQLitening and SQLite.
I hope it gives others some ideas.
Nice! Thanks for sharing.
We are still developing our SQLitening-based application, so I don't have any real-life experience to relate just yet, but the app we are building will have modest client/server demands compared to what some of you are doing with it.
Sure looks nice!
1) I would suggest not copying the current database over the last good backup.
That puts all your eggs in one basket if the source ever becomes corrupted:
if an error occurs in the current database, the backup will also contain it.
2) Copy might not get all the data. Xcopy /e/r/y/d c:\sql\*.* d:\sql\sqldate\*.*
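The dated-copy idea behind that xcopy line can be sketched in Python; the paths and the date-stamped naming scheme here are illustrative, and a plain file copy like this is only safe when no writer is active:

```python
import shutil
from datetime import date
from pathlib import Path

def dated_backup(db_path, backup_dir):
    """Copy the database to a date-stamped file so older backups
    are never overwritten by a possibly corrupt source."""
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = date.today().strftime("%Y%m%d")
    dest = dest_dir / f"{Path(db_path).stem}-{stamp}.sav"
    shutil.copy2(db_path, dest)  # plain file copy: no active writers!
    return dest
```

Keeping several dated copies means a corruption discovered days later can still be rolled back past.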
3) I read on Nabble that trying to correct a corrupted database is not a good practice, if it were ever needed.
The expert, Slavik, suggested saving write requests so the database can be rebuilt.
Appending all slExe statements to a text file does not slow down the system.
I'm considering modifying the server, as I currently only do this for the one client (me) while testing.
This is a very simple thing, but would greatly add to the protection of the database.
I hope others would also like this added.
4) Since using SQLitening I have had two bad errors.
The worst was when the database ended up zero bytes long; I never figured out what caused it.
I believe it happened while I was testing, bypassing the normal SQLitening routines and issuing raw TCP requests to the server.
I'll never know whether a misaligned TCP request or some PUT command did it, but it occurred.
The only other problem was that VACUUM returned an error and the database could not be repaired.
I got it working again using SQLite Expert. This is when I read on Nabble that the SQL log should have
been used to rebuild the database from the insert, update and delete statements.
This sounds like a lot of work, but it is really just: open "writes.sav" for append, print #, close.
This log can also be used for security purposes, to see who made every change and when.
The "writes.sav" file could be deleted on intervals or kept forever.
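A minimal sketch of that open-append-print-close logging, and the corresponding rebuild, assuming complete literal SQL statements are logged (statements with bound parameters would need their values expanded first). The "writes.sav" name comes from the post; everything else is illustrative:

```python
import sqlite3

LOG_FILE = "writes.sav"  # file name taken from the post above

def log_write(sql):
    """The whole trick: open for append, write the statement, close."""
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(sql.rstrip(";") + ";\n")

def replay_log(db_path):
    """Rebuild a database by re-executing every logged statement in order."""
    con = sqlite3.connect(db_path)
    try:
        with open(LOG_FILE, encoding="utf-8") as f:
            con.executescript(f.read())
    finally:
        con.close()
```

Because each log entry is one open/write/close, the overhead per write request is small, which matches the observation above that it does not slow down the system.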
I also use a simple batch file to ZIP the backup and then FTP it to another server. It is many times faster than
the many slow commercial backup programs that run for hours.
Executing the .BAT file can also be automated with Task Scheduler.
Used WinZip in this example.
1) send.bat file
wzzip backup.zip -sMyPassword
ftp -s:send.cmd mysite.com
2) send.cmd file
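For comparison, the ZIP-then-FTP step could also be sketched with Python's standard library. The host and credentials below are placeholders, and note that Python's zipfile module cannot write password-protected archives, so unlike the wzzip -s option this archive is unencrypted:

```python
import zipfile
from ftplib import FTP

def zip_backup(files, zip_name="backup.zip"):
    """Compress the backup files into a single (unencrypted) archive."""
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in files:
            zf.write(path)
    return zip_name

def ftp_backup(zip_name, host, user, password):
    """Upload the archive to the off-site server.
    host/user/password are placeholders for real credentials."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        with open(zip_name, "rb") as f:
            ftp.storbinary(f"STOR {zip_name}", f)
```

Either variant can be scheduled nightly, just like the .BAT file.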
I have used SQLitening in several production systems and have not had any issues. I enjoy the ease of deployment and the ability to back up the databases quickly and easily. This is why I opted to use SQLitening over MySQL.