• Welcome to SQLitening Support Forum.



Messages - Fim

"Did you run your routine through the query planner in Sqlite3.exe?"

I use a PowerBASIC program.
All testing is done in the program that will be distributed to the users.
The latest edition is available here: www.ordkors.se

-- ORDKORS.D.DB3  Distributed database
-- ===================================
Drop Table If Exists Wordlist;
Create Table Wordlist(
    Length   Integer not null,
    Word     Text    not null,
    "Group"  Text    not null,   -- GROUP is an SQL keyword, so the column name must be quoted
    Unique (Word, Length));

Drop Index if exists Length_idx;
Create Index Length_idx      on Wordlist (Length);

Drop Index if exists Word_idx;
Create Index Word_idx      on Wordlist (Word);

Drop Index if exists Group_idx;
Create Index Group_idx     on Wordlist ("Group");
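For reference, the schema above runs unchanged under any SQLite binding once `"Group"` is quoted. A minimal sketch using Python's built-in sqlite3 module (the table and index names come from the DDL above; the sample words are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    Drop Table If Exists Wordlist;
    Create Table Wordlist(
        Length  Integer not null,
        Word    Text    not null,
        "Group" Text    not null,  -- GROUP is a keyword, so it must be quoted
        Unique (Word, Length));
    Create Index Length_idx on Wordlist (Length);
    Create Index Word_idx   on Wordlist (Word);
    Create Index Group_idx  on Wordlist ("Group");
""")

# Made-up sample rows, just to show the shape of the data.
words = [(5, "TYSTA", "A"), (5, "TYPER", "A"), (3, "ORD", "B")]
con.executemany('Insert Into Wordlist (Length, Word, "Group") Values (?,?,?)', words)

# The Length index supports the "all words of length N" query.
rows = con.execute("Select Word From Wordlist Where Length = 5").fetchall()
```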

I cannot use LIKE because the search pattern is a bit more complex. Some examples:

A??2?2A      means that letters 4 and 6 must be the same letter
123456789    means that all letters must be different
QW,.?        means that the third letter must be a consonant and the fourth a vowel
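My reading of the rules above can be sketched as a small matcher. Everything here is an assumption built from the three examples: letters match literally, `?` matches anything, `,` a consonant, `.` a vowel, equal digits must map to equal letters and distinct digits to distinct letters; the Swedish vowel set is also my guess:

```python
VOWELS = set("AEIOUYÅÄÖ")  # assumed Swedish vowel set

def matches(pattern: str, word: str) -> bool:
    """Test a word against the pattern language described above (my reading)."""
    if len(pattern) != len(word):
        return False
    digit_map = {}  # digit -> letter it stands for
    for p, w in zip(pattern, word.upper()):
        if p == "?":                      # any letter
            continue
        if p == ",":                      # consonant position
            if w in VOWELS:
                return False
        elif p == ".":                    # vowel position
            if w not in VOWELS:
                return False
        elif p.isdigit():                 # same digit = same letter,
            if p in digit_map:            # different digits = different letters
                if digit_map[p] != w:
                    return False
            else:
                if w in digit_map.values():
                    return False
                digit_map[p] = w
        elif p.upper() != w:              # literal letter
            return False
    return True
```

For example, `matches("A??2?2A", "ANDSESA")` holds because letters 4 and 6 are both S, while `matches("12", "AA")` fails because distinct digits demand distinct letters.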
I have built new and better indexes, and now performance is very good.
Searching for TY??A takes 0.188 seconds instead of 3.214.
Let me explain. I have a database with a table of 4,000,000 words, with an index on word length. I am looking for words in this table based on word length. For example, I want to grab all the words that have a length of 5 letters and match the pattern "TY??A". Reading the database returns 12,275 words, of which 7 match my pattern. The whole search takes 3.214 seconds, of which the actual reading of the database takes 2.743 seconds. This means that the reading of the database is the bottleneck. There are thousands of such searches per month.
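The flow described above, fetch every word of the right length and then filter in the client, can be sketched with Python's sqlite3 (table and column names are taken from the schema posted earlier; `fnmatchcase` stands in for the full custom pattern matcher, since it handles the `?` wildcard):

```python
import sqlite3
from fnmatch import fnmatchcase

con = sqlite3.connect(":memory:")
con.execute('Create Table Wordlist(Length Integer not null, Word Text not null, "Group" Text not null)')
con.execute("Create Index Length_idx on Wordlist (Length)")
con.executemany('Insert Into Wordlist Values (?,?,?)',
                [(5, "TYSTA", "A"), (5, "TYPER", "A"), (5, "HYLLA", "A"), (6, "TYSTNA", "A")])

def search(pattern):
    # Step 1: the Length index narrows the candidate set to words of the right size.
    rows = con.execute("Select Word From Wordlist Where Length = ?", (len(pattern),))
    # Step 2: the complex pattern test runs in the application, row by row.
    return [w for (w,) in rows if fnmatchcase(w, pattern)]
```

With the toy data above, `search("TY??A")` narrows four rows to the single word that fits the pattern.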

The updating of the database occurs once per month and the time is irrelevant.

So, back to my question: is there any way to make the reading faster at the expense of the time for the update?
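One standard SQLite trade of update time for read speed, my suggestion rather than something from the thread, is a covering index: if an index holds every column the SELECT needs, here (Length, Word), the query is answered from the index alone and the table rows are never visited. Inserts pay for maintaining the extra index, which fits a database rebuilt once a month. A sketch with Python's sqlite3:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('Create Table Wordlist(Length Integer not null, Word Text not null, "Group" Text not null)')
# Covering index: both Length (the WHERE column) and Word (the SELECT column)
# are in the index, so SQLite can satisfy the query without touching the table.
con.execute("Create Index Covering_idx on Wordlist (Length, Word)")

# The query plan confirms the index-only access path.
plan = con.execute("Explain Query Plan Select Word From Wordlist Where Length = 5").fetchone()
```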

It works perfectly with WHERE, bound data and slSelBind.
I have tested with 100,000 records, all with different values of C1.
If I understand you correctly, you mean that I should read the entire table from beginning to end and select the desired entries in the program. I have done so now. It was acceptably fast, even though I have 120,000 records in the table. So I am content with that solution.
Thanks for the help.

Fim W
The SELECT statement works OK sometimes, but not all the time.
How do I solve the problem?

    CRYPT_KEY = "1234567890123456"
    slSetProcessMods "K" + CRYPT_KEY
    ' Note: nothing in this snippet assigns RC, so the check below tests a
    ' stale value; RC presumably comes from an earlier call in the full routine.
    IF RC <> 0 THEN
        CALL ERROR_OUT("AuxRutsA RC=" + STR$(RC))
    END IF

Fim W
You are right.
I am now using the name of the computer instead of the IP address; it works fine.

/Fim W.
Now it is working.
1. Hosts = the address of the PC where the server is running: does not work.
2. Turn off the firewall on the server: works.

Thanks for the help

/Fim W.
I have to run the program as an administrator. It is not enough that my account is an administrator account. So now it is running.

Now the next problem.
When I try to connect to the server using LABBET as the server name, I receive -16 (cannot connect) after 25 seconds.
If I use \\LABBET as the server name, I receive -16 immediately.

I will do checks 2, 3 and 4.
I have never changed SQLiteningServer.Cfg.

Fim W.
I run as administrator. But I have heard that there is some form of Super Administrator. I'll look into it.

/Fim W.
Can anyone give a hint about where to look for the error?

/Fim W.
I have received the e-mail from support@libxl:
Hello Fim W
http://www.libxl.com/ looks like the perfect solution.
As I understand it, LibXL creates and writes an Excel file directly, without going through Excel itself.
So it should be fast, probably even faster than importing a CSV file.
Hopefully you can also control whether a field should be entered as text or as a number,
which you cannot do with a CSV file.
I have been looking for this for a long time.
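The CSV limitation mentioned above is easy to demonstrate: a CSV file stores only text, so any text-versus-number distinction must be re-guessed by the importer. A small stdlib-only Python illustration (nothing here is LibXL-specific):

```python
import csv, io

# Write one numeric-looking text value ("007") and one real number.
buf = io.StringIO()
csv.writer(buf).writerow(["007", 7])

# Read it back: CSV cells have no types, so both values come back as
# strings, and whether "007" was text or the number 7 is lost.
row = next(csv.reader(io.StringIO(buf.getvalue())))
```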