Show posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Bern Ertl

31
It's hard to debug or troubleshoot a program when you can't see any code, so forgive me if this question covers obvious territory, but is the UPDATE code executed from system B attempting to update the field in the database where the image is stored?  If so, why?  UPDATE only needs to update the fields that actually have new/changed data.
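
For example (a minimal sketch with hypothetical table and column names), an UPDATE can name only the columns that actually changed and leave the image BLOB out of the SET list entirely:

UPDATE Customers
   SET Name  = 'New name',
       Phone = '555-0100'
 WHERE CustomerID = 42;
   -- the Photo BLOB column is never mentioned, so the image is not rewritten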

33
Nice work cj.  Not sure why your posts are now adjusting the font size to size=2, but I was able to read your post in the message editor after hitting the quote button.
34
SELECT BES_KVANT * PRIS / (CASE WHEN PRISKVANT > 0 THEN PRISKVANT ELSE 1 END) FROM IRA
35
General Board / Re: Forum just moved to new web host
April 04, 2018, 10:09:25 am
Thanks Paul.  That probably explains why I had to log in for the first time in forever yesterday.
36
Thanks cj. Good catch.
37
I don't think there is a way to enumerate column names in SQL syntax, so you can't search and select columns with pure SQL.  You are going to have to use SQLite and/or SQLitening functions to enumerate/search the column names.
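
If it helps, one SQLite-side option (not standard SQL, and MyTable here is just a placeholder name) is the table_info pragma, which returns one row per column so the names can be searched from code:

PRAGMA table_info(MyTable);
   -- returns cid, name, type, notnull, dflt_value and pk for each column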
38
Code not shown, so forgive me if I'm making some assumptions, but are you opening the source file, reading a record, inserting it into the db, reading the next record, etc.?  It is *much* more efficient to load a sequential file into a memory buffer in one go and process the records from memory.  I use a function like this:

FUNCTION FileToBuffer( BYVAL sFile AS STRING, sBuffer AS STRING) AS LONG

   'Returns %True if successful, %False otherwise

   LOCAL lDev AS LONG

   lDev = FREEFILE
   TRY
      OPEN sFile FOR BINARY AS #lDev
      GET$ #lDev, LOF( lDev), sBuffer
      CLOSE #lDev
   CATCH
      CLOSE #lDev
      EXIT FUNCTION
   END TRY

   FUNCTION = %True

END FUNCTION


You can then use PARSE to split the buffer into an array of records (assuming you don't hit any memory limitations), and then split each record (one at a time as you process it) into an array of elements.

For even better efficiency, you could forgo PARSE and just use a BYTE pointer to read the data from the buffer.  It's a bit more programming on your end, but much more efficient when executing.
39
Quote...
In most cases, UNIQUE and PRIMARY KEY constraints are implemented by creating a unique index in the database. (The exceptions are INTEGER PRIMARY KEY and PRIMARY KEYs on WITHOUT ROWID tables.) Hence, the following schemas are logically equivalent:

    CREATE TABLE t1(a, b UNIQUE);

    CREATE TABLE t1(a, b PRIMARY KEY);

    CREATE TABLE t1(a, b);
    CREATE UNIQUE INDEX t1b ON t1(b);
...


http://www.sqlite.org/lang_createtable.html  (scroll down to: SQL Data Constraints)
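
If you want to see this in one of your own databases (just a quick check), the implicit indexes SQLite creates for UNIQUE and PRIMARY KEY constraints show up in sqlite_master with names like sqlite_autoindex_t1_1:

SELECT name, tbl_name FROM sqlite_master WHERE type = 'index';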
40
Interesting changes cj.  Thanks.
41
Damn.  My mistake... correct syntax ( http://www.sqlite.org/lang_expr.html ) is:  CAST ( expr AS type)

Try:

sql="SELECT CAST( substr( CAST( SalesCategory AS TEXT),1, 2) AS INTEGER) AS Cat from MyTable"

edit:  cj, you might try testing your /1000 code with data values like 51999 or so to ensure there are no rounding errors.  I'm not sure how SQLite handles rounding and/or the conversion of real-number intermediate results to integer final results.  The text extraction I proposed should always work regardless of SQLite's data type handling and/or any potential rounding from (real number?) division of integers.
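
For example, a quick comparison you could run (literal values only, so both operands of the division are integers):

SELECT 51999/1000                                            AS by_division,
       CAST( substr( CAST( 51999 AS TEXT), 1, 2) AS INTEGER) AS by_text;
   -- with two integer operands SQLite does integer division, so both columns
   -- return 51 here; values stored with REAL affinity could behave differently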
42
If the sales category is stored as an integer/numeric, you can use the following SQL to safely extract the first two digits:

SELECT CAST( substr( CAST( SalesCategory, TEXT),1, 2), INTEGER) AS CatAbbr,  ...
43
Quote
The SQL standard specifies a huge number of keywords which may not be used as the names of tables, indices, columns, databases, user-defined functions, collations, virtual table modules, or any other named object. ...

The list below shows all possible keywords used by any build of SQLite regardless of compile-time options. ...

...
19. CAST
...


http://www.sqlite.org/lang_keywords.html
44
After some further testing, it seems like the mcat$ function only outperforms JOIN$ in certain cases.  I tested with some real-world data in a larger data set and JOIN$ actually performed twice as fast as mcat$.  I think I'm going to stick with JOIN$ until mcat$ or another alternative is proven more efficient for all cases.

FYI, the details:  https://forum.powerbasic.com/forum/user-to-user-discussions/powerbasic-inline-assembler/768261-alternative-to-join?p=768782#post768782
45
The biggest optimization gain should be in the BuildRowDataChunk() function, which joins/concatenates the data inside a loop, so I think you will likely only notice a difference if you are retrieving a lot of data at a time (i.e. lots of string/binary data in one row/statement).