| BLOB | TEXT | [CAST] to TEXT, ensure zero terminator
**
** )^
**
@@ -4866,7 +5532,7 @@ SQLITE_API int sqlite3_column_type(sqlite3_stmt*, int iCol);
**
** ^The sqlite3_finalize() function is called to delete a [prepared statement].
** ^If the most recent evaluation of the statement encountered no errors
-** or if the statement is never been evaluated, then sqlite3_finalize() returns
+** or if the statement has never been evaluated, then sqlite3_finalize() returns
** SQLITE_OK. ^If the most recent evaluation of statement S failed, then
** sqlite3_finalize(S) returns the appropriate [error code] or
** [extended error code].
@@ -4901,20 +5567,33 @@ SQLITE_API int sqlite3_finalize(sqlite3_stmt *pStmt);
** ^The [sqlite3_reset(S)] interface resets the [prepared statement] S
** back to the beginning of its program.
**
-** ^If the most recent call to [sqlite3_step(S)] for the
-** [prepared statement] S returned [SQLITE_ROW] or [SQLITE_DONE],
-** or if [sqlite3_step(S)] has never before been called on S,
-** then [sqlite3_reset(S)] returns [SQLITE_OK].
+** ^The return code from [sqlite3_reset(S)] indicates whether or not
+** the previous evaluation of prepared statement S completed successfully.
+** ^If [sqlite3_step(S)] has never before been called on S or if
+** [sqlite3_step(S)] has not been called since the previous call
+** to [sqlite3_reset(S)], then [sqlite3_reset(S)] will return
+** [SQLITE_OK].
**
** ^If the most recent call to [sqlite3_step(S)] for the
** [prepared statement] S indicated an error, then
** [sqlite3_reset(S)] returns an appropriate [error code].
+** ^The [sqlite3_reset(S)] interface might also return an [error code]
+** if there were no prior errors but the process of resetting
+** the prepared statement caused a new error. ^For example, if an
+** [INSERT] statement with a [RETURNING] clause is only stepped one time,
+** that one call to [sqlite3_step(S)] might return SQLITE_ROW but
+** the overall statement might still fail and the [sqlite3_reset(S)] call
+** might return SQLITE_BUSY if locking constraints prevent the
+** database change from committing. Therefore, it is important that
+** applications check the return code from [sqlite3_reset(S)] even if
+** no prior call to [sqlite3_step(S)] indicated a problem.
**
** ^The [sqlite3_reset(S)] interface does not change the values
** of any [sqlite3_bind_blob|bindings] on the [prepared statement] S.
*/
SQLITE_API int sqlite3_reset(sqlite3_stmt *pStmt);
+
/*
** CAPI3REF: Create Or Redefine SQL Functions
** KEYWORDS: {function creation routines}
@@ -4923,8 +5602,8 @@ SQLITE_API int sqlite3_reset(sqlite3_stmt *pStmt);
** ^These functions (collectively known as "function creation routines")
** are used to add SQL functions or aggregates or to redefine the behavior
** of existing SQL functions or aggregates. The only differences between
-** the three "sqlite3_create_function*" routines are the text encoding
-** expected for the second parameter (the name of the function being
+** the three "sqlite3_create_function*" routines are the text encoding
+** expected for the second parameter (the name of the function being
** created) and the presence or absence of a destructor callback for
** the application data pointer. Function sqlite3_create_window_function()
** is similar, but allows the user to supply the extra callback functions
@@ -4938,7 +5617,7 @@ SQLITE_API int sqlite3_reset(sqlite3_stmt *pStmt);
** ^The second parameter is the name of the SQL function to be created or
** redefined. ^The length of the name is limited to 255 bytes in a UTF-8
** representation, exclusive of the zero-terminator. ^Note that the name
-** length limit is in UTF-8 bytes, not characters nor UTF-16 bytes.
+** length limit is in UTF-8 bytes, not characters nor UTF-16 bytes.
** ^Any attempt to create a function with a longer name
** will result in [SQLITE_MISUSE] being returned.
**
@@ -4953,7 +5632,7 @@ SQLITE_API int sqlite3_reset(sqlite3_stmt *pStmt);
** ^The fourth parameter, eTextRep, specifies what
** [SQLITE_UTF8 | text encoding] this SQL function prefers for
** its parameters. The application should set this parameter to
-** [SQLITE_UTF16LE] if the function implementation invokes
+** [SQLITE_UTF16LE] if the function implementation invokes
** [sqlite3_value_text16le()] on an input, or [SQLITE_UTF16BE] if the
** implementation invokes [sqlite3_value_text16be()] on an input, or
** [SQLITE_UTF16] if [sqlite3_value_text16()] is used, or [SQLITE_UTF8]
@@ -4976,17 +5655,15 @@ SQLITE_API int sqlite3_reset(sqlite3_stmt *pStmt);
** within VIEWs, TRIGGERs, CHECK constraints, generated column expressions,
** index expressions, or the WHERE clause of partial indexes.
**
-**
** For best security, the [SQLITE_DIRECTONLY] flag is recommended for
** all application-defined SQL functions that do not need to be
-** used inside of triggers, view, CHECK constraints, or other elements of
-** the database schema. This flags is especially recommended for SQL
+** used inside of triggers, views, CHECK constraints, or other elements of
+** the database schema. This flag is especially recommended for SQL
** functions that have side effects or reveal internal application state.
** Without this flag, an attacker might be able to modify the schema of
** a database file to include invocations of the function with parameters
** chosen by the attacker, which the application will then execute when
** the database file is opened and read.
-**
**
** ^(The fifth parameter is an arbitrary pointer. The implementation of the
** function can gain access to this pointer using [sqlite3_user_data()].)^
@@ -5001,21 +5678,21 @@ SQLITE_API int sqlite3_reset(sqlite3_stmt *pStmt);
** SQL function or aggregate, pass NULL pointers for all three function
** callbacks.
**
-** ^The sixth, seventh, eighth and ninth parameters (xStep, xFinal, xValue
+** ^The sixth, seventh, eighth and ninth parameters (xStep, xFinal, xValue
** and xInverse) passed to sqlite3_create_window_function are pointers to
** C-language callbacks that implement the new function. xStep and xFinal
** must both be non-NULL. xValue and xInverse may either both be NULL, in
-** which case a regular aggregate function is created, or must both be
+** which case a regular aggregate function is created, or must both be
** non-NULL, in which case the new function may be used as either an aggregate
** or aggregate window function. More details regarding the implementation
-** of aggregate window functions are
+** of aggregate window functions are
** [user-defined window functions|available here].
**
** ^(If the final parameter to sqlite3_create_function_v2() or
-** sqlite3_create_window_function() is not NULL, then it is destructor for
-** the application data pointer. The destructor is invoked when the function
-** is deleted, either by being overloaded or when the database connection
-** closes.)^ ^The destructor is also invoked if the call to
+** sqlite3_create_window_function() is not NULL, then it is the destructor for
+** the application data pointer. The destructor is invoked when the function
+** is deleted, either by being overloaded or when the database connection
+** closes.)^ ^The destructor is also invoked if the call to
** sqlite3_create_function_v2() fails. ^When the destructor callback is
** invoked, it is passed a single argument which is a copy of the application
** data pointer which was the fifth parameter to sqlite3_create_function_v2().
@@ -5028,7 +5705,7 @@ SQLITE_API int sqlite3_reset(sqlite3_stmt *pStmt);
** nArg parameter is a better match than a function implementation with
** a negative nArg. ^A function where the preferred text encoding
** matches the database encoding is a better
-** match than a function where the encoding is different.
+** match than a function where the encoding is different.
** ^A function where the encoding difference is between UTF16le and UTF16be
** is a closer match than a function where the encoding difference is
** between UTF8 and UTF16.
@@ -5087,7 +5764,7 @@ SQLITE_API int sqlite3_create_window_function(
/*
** CAPI3REF: Text Encodings
**
-** These constant define integer codes that represent the various
+** These constants define integer codes that represent the various
** text encodings supported by SQLite.
*/
#define SQLITE_UTF8 1 /* IMP: R-37514-35566 */
@@ -5100,7 +5777,7 @@ SQLITE_API int sqlite3_create_window_function(
/*
** CAPI3REF: Function Flags
**
-** These constants may be ORed together with the
+** These constants may be ORed together with the
** [SQLITE_UTF8 | preferred text encoding] as the fourth argument
** to [sqlite3_create_function()], [sqlite3_create_function16()], or
** [sqlite3_create_function_v2()].
@@ -5116,16 +5793,27 @@ SQLITE_API int sqlite3_create_window_function(
** SQLite might also optimize deterministic functions by factoring them
** out of inner loops.
**
-**
+**
** [[SQLITE_DIRECTONLY]] SQLITE_DIRECTONLY
** The SQLITE_DIRECTONLY flag means that the function may only be invoked
-** from top-level SQL, and cannot be used in VIEWs or TRIGGERs nor in
+** from top-level SQL, and cannot be used in VIEWs or TRIGGERs nor in
** schema structures such as [CHECK constraints], [DEFAULT clauses],
** [expression indexes], [partial indexes], or [generated columns].
-** The SQLITE_DIRECTONLY flags is a security feature which is recommended
-** for all [application-defined SQL functions], and especially for functions
-** that have side-effects or that could potentially leak sensitive
-** information.
+**
+** The SQLITE_DIRECTONLY flag is recommended for any
+** [application-defined SQL function]
+** that has side-effects or that could potentially leak sensitive information.
+** This will prevent attacks in which an application is tricked
+** into using a database file that has had its schema surreptitiously
+** modified to invoke the application-defined function in ways that are
+** harmful.
+**
+** Some people say it is good practice to set SQLITE_DIRECTONLY on all
+** [application-defined SQL functions], regardless of whether or not they
+** are security sensitive, as doing so prevents those functions from being used
+** inside of the database schema, and thus ensures that the database
+** can be inspected and modified using generic tools (such as the [CLI])
+** that do not have access to the application-defined functions.
**
**
** [[SQLITE_INNOCUOUS]] SQLITE_INNOCUOUS
@@ -5152,13 +5840,36 @@ SQLITE_API int sqlite3_create_window_function(
**
**
** [[SQLITE_SUBTYPE]] SQLITE_SUBTYPE
-** The SQLITE_SUBTYPE flag indicates to SQLite that a function may call
+** The SQLITE_SUBTYPE flag indicates to SQLite that a function might call
** [sqlite3_value_subtype()] to inspect the sub-types of its arguments.
-** Specifying this flag makes no difference for scalar or aggregate user
-** functions. However, if it is not specified for a user-defined window
-** function, then any sub-types belonging to arguments passed to the window
-** function may be discarded before the window function is called (i.e.
-** sqlite3_value_subtype() will always return 0).
+** This flag instructs SQLite to omit some corner-case optimizations that
+** might disrupt the operation of the [sqlite3_value_subtype()] function,
+** causing it to return zero rather than the correct subtype.
+** All SQL functions that invoke [sqlite3_value_subtype()] should have this
+** property. If the SQLITE_SUBTYPE property is omitted, then the return
+** value from [sqlite3_value_subtype()] might sometimes be zero even though
+** a non-zero subtype was specified by the function argument expression.
+**
+** [[SQLITE_RESULT_SUBTYPE]] SQLITE_RESULT_SUBTYPE
+** The SQLITE_RESULT_SUBTYPE flag indicates to SQLite that a function might call
+** [sqlite3_result_subtype()] to cause a sub-type to be associated with its
+** result.
+** Every function that invokes [sqlite3_result_subtype()] should have this
+** property. If it does not, then the call to [sqlite3_result_subtype()]
+** might become a no-op if the function is used as a term in an
+** [expression index]. On the other hand, SQL functions that never invoke
+** [sqlite3_result_subtype()] should avoid setting this property, as the
+** purpose of this property is to disable certain optimizations that are
+** incompatible with subtypes.
+**
+** [[SQLITE_SELFORDER1]] SQLITE_SELFORDER1
+** The SQLITE_SELFORDER1 flag indicates that the function is an aggregate
+** that internally orders the values provided to the first argument. The
+** ordered-set aggregate SQL notation with a single ORDER BY term can be
+** used to invoke this function. If the ordered-set aggregate notation is
+** used on a function that lacks this flag, then an error is raised. Note
+** that the ordered-set aggregate syntax is only available if SQLite is
+** built using the -DSQLITE_ENABLE_ORDERED_SET_AGGREGATES compile-time option.
**
**
*/
@@ -5166,13 +5877,15 @@ SQLITE_API int sqlite3_create_window_function(
#define SQLITE_DIRECTONLY 0x000080000
#define SQLITE_SUBTYPE 0x000100000
#define SQLITE_INNOCUOUS 0x000200000
+#define SQLITE_RESULT_SUBTYPE 0x001000000
+#define SQLITE_SELFORDER1 0x002000000
/*
** CAPI3REF: Deprecated Functions
** DEPRECATED
**
** These functions are [deprecated]. In order to maintain
-** backwards compatibility with older code, these functions continue
+** backwards compatibility with older code, these functions continue
** to be supported. However, new applications should avoid
** the use of these functions. To encourage programmers to avoid
** these functions, we will not explain what they do.
@@ -5240,11 +5953,11 @@ SQLITE_API SQLITE_DEPRECATED int sqlite3_memory_alarm(void(*)(void*,sqlite3_int6
** sqlite3_value_text16be() and sqlite3_value_text16le() interfaces
** extract UTF-16 strings as big-endian and little-endian respectively.
**
-** ^If [sqlite3_value] object V was initialized
+** ^If [sqlite3_value] object V was initialized
** using [sqlite3_bind_pointer(S,I,P,X,D)] or [sqlite3_result_pointer(C,P,X,D)]
** and if X and Y are strings that compare equal according to strcmp(X,Y),
** then sqlite3_value_pointer(V,Y) will return the pointer P. ^Otherwise,
-** sqlite3_value_pointer(V,Y) returns a NULL. The sqlite3_bind_pointer()
+** sqlite3_value_pointer(V,Y) returns a NULL. The sqlite3_bind_pointer()
** routine is part of the [pointer passing interface] added for SQLite 3.20.0.
**
** ^(The sqlite3_value_type(V) interface returns the
@@ -5270,7 +5983,7 @@ SQLITE_API SQLITE_DEPRECATED int sqlite3_memory_alarm(void(*)(void*,sqlite3_int6
** sqlite3_value_nochange(X) interface returns true if and only if
** the column corresponding to X is unchanged by the UPDATE operation
** that the xUpdate method call was invoked to implement and if
-** and the prior [xColumn] method call that was invoked to extracted
+** the prior [xColumn] method call that was invoked to extract
** the value for that column returned without setting a result (probably
** because it queried [sqlite3_vtab_nochange()] and found that the column
** was unchanging). ^Within an [xUpdate] method, any value for which
@@ -5331,6 +6044,28 @@ SQLITE_API int sqlite3_value_numeric_type(sqlite3_value*);
SQLITE_API int sqlite3_value_nochange(sqlite3_value*);
SQLITE_API int sqlite3_value_frombind(sqlite3_value*);
+/*
+** CAPI3REF: Report the internal text encoding state of an sqlite3_value object
+** METHOD: sqlite3_value
+**
+** ^(The sqlite3_value_encoding(X) interface returns one of [SQLITE_UTF8],
+** [SQLITE_UTF16BE], or [SQLITE_UTF16LE] according to the current text encoding
+** of the value X, assuming that X has type TEXT.)^ If sqlite3_value_type(X)
+** returns something other than SQLITE_TEXT, then the return value from
+** sqlite3_value_encoding(X) is meaningless. ^Calls to
+** [sqlite3_value_text(X)], [sqlite3_value_text16(X)], [sqlite3_value_text16be(X)],
+** [sqlite3_value_text16le(X)], [sqlite3_value_bytes(X)], or
+** [sqlite3_value_bytes16(X)] might change the encoding of the value X and
+** thus change the return from subsequent calls to sqlite3_value_encoding(X).
+**
+** This routine is intended for use by applications that test and validate
+** the SQLite implementation. It inquires about the opaque
+** internal state of an [sqlite3_value] object. Ordinary applications should
+** not need to know what the internal state of an sqlite3_value object is and
+** hence should not need to use this interface.
+*/
+SQLITE_API int sqlite3_value_encoding(sqlite3_value*);
+
/*
** CAPI3REF: Finding The Subtype Of SQL Values
** METHOD: sqlite3_value
@@ -5340,6 +6075,12 @@ SQLITE_API int sqlite3_value_frombind(sqlite3_value*);
** information can be used to pass a limited amount of context from
** one SQL function to another. Use the [sqlite3_result_subtype()]
** routine to set the subtype for the return value of an SQL function.
+**
+** Every [application-defined SQL function] that invokes this interface
+** should include the [SQLITE_SUBTYPE] property in the text
+** encoding argument when the function is [sqlite3_create_function|registered].
+** If the [SQLITE_SUBTYPE] property is omitted, then sqlite3_value_subtype()
+** might return zero instead of the upstream subtype in some corner cases.
*/
SQLITE_API unsigned int sqlite3_value_subtype(sqlite3_value*);
@@ -5348,10 +6089,11 @@ SQLITE_API unsigned int sqlite3_value_subtype(sqlite3_value*);
** METHOD: sqlite3_value
**
** ^The sqlite3_value_dup(V) interface makes a copy of the [sqlite3_value]
-** object D and returns a pointer to that copy. ^The [sqlite3_value] returned
+** object V and returns a pointer to that copy. ^The [sqlite3_value] returned
** is a [protected sqlite3_value] object even if the input is not.
** ^The sqlite3_value_dup(V) interface returns NULL if V is NULL or if a
-** memory allocation fails.
+** memory allocation fails. ^If V is a [pointer value], then the result
+** of sqlite3_value_dup(V) is a NULL value.
**
** ^The sqlite3_value_free(V) interface frees an [sqlite3_value] object
** previously obtained from [sqlite3_value_dup()]. ^If V is a NULL pointer
@@ -5367,7 +6109,7 @@ SQLITE_API void sqlite3_value_free(sqlite3_value*);
** Implementations of aggregate SQL functions use this
** routine to allocate memory for storing their state.
**
-** ^The first time the sqlite3_aggregate_context(C,N) routine is called
+** ^The first time the sqlite3_aggregate_context(C,N) routine is called
** for a particular aggregate function, SQLite allocates
** N bytes of memory, zeroes out that memory, and returns a pointer
** to the new memory. ^On second and subsequent calls to
@@ -5380,19 +6122,19 @@ SQLITE_API void sqlite3_value_free(sqlite3_value*);
** In those cases, sqlite3_aggregate_context() might be called for the
** first time from within xFinal().)^
**
-** ^The sqlite3_aggregate_context(C,N) routine returns a NULL pointer
+** ^The sqlite3_aggregate_context(C,N) routine returns a NULL pointer
** when first called if N is less than or equal to zero or if a memory
-** allocate error occurs.
+** allocation error occurs.
**
** ^(The amount of space allocated by sqlite3_aggregate_context(C,N) is
-** determined by the N parameter on first successful call. Changing the
-** value of N in any subsequents call to sqlite3_aggregate_context() within
+** determined by the N parameter on the first successful call. Changing the
+** value of N in any subsequent call to sqlite3_aggregate_context() within
** the same aggregate function instance will not resize the memory
** allocation.)^ Within the xFinal callback, it is customary to set
-** N=0 in calls to sqlite3_aggregate_context(C,N) so that no
+** N=0 in calls to sqlite3_aggregate_context(C,N) so that no
** pointless memory allocations occur.
**
-** ^SQLite automatically frees the memory allocated by
+** ^SQLite automatically frees the memory allocated by
** sqlite3_aggregate_context() when the aggregate query concludes.
**
** The first parameter must be a copy of the
@@ -5437,48 +6179,56 @@ SQLITE_API sqlite3 *sqlite3_context_db_handle(sqlite3_context*);
** METHOD: sqlite3_context
**
** These functions may be used by (non-aggregate) SQL functions to
-** associate metadata with argument values. If the same value is passed to
-** multiple invocations of the same SQL function during query execution, under
-** some circumstances the associated metadata may be preserved. An example
-** of where this might be useful is in a regular-expression matching
-** function. The compiled version of the regular expression can be stored as
-** metadata associated with the pattern string.
+** associate auxiliary data with argument values. If the same argument
+** value is passed to multiple invocations of the same SQL function during
+** query execution, under some circumstances the associated auxiliary data
+** might be preserved. An example of where this might be useful is in a
+** regular-expression matching function. The compiled version of the regular
+** expression can be stored as auxiliary data associated with the pattern string.
** Then as long as the pattern string remains the same,
** the compiled regular expression can be reused on multiple
** invocations of the same function.
**
-** ^The sqlite3_get_auxdata(C,N) interface returns a pointer to the metadata
+** ^The sqlite3_get_auxdata(C,N) interface returns a pointer to the auxiliary data
** associated by the sqlite3_set_auxdata(C,N,P,X) function with the Nth argument
** value to the application-defined function. ^N is zero for the left-most
-** function argument. ^If there is no metadata
+** function argument. ^If there is no auxiliary data
** associated with the function argument, the sqlite3_get_auxdata(C,N) interface
** returns a NULL pointer.
**
-** ^The sqlite3_set_auxdata(C,N,P,X) interface saves P as metadata for the N-th
-** argument of the application-defined function. ^Subsequent
+** ^The sqlite3_set_auxdata(C,N,P,X) interface saves P as auxiliary data for the
+** N-th argument of the application-defined function. ^Subsequent
** calls to sqlite3_get_auxdata(C,N) return P from the most recent
-** sqlite3_set_auxdata(C,N,P,X) call if the metadata is still valid or
-** NULL if the metadata has been discarded.
+** sqlite3_set_auxdata(C,N,P,X) call if the auxiliary data is still valid or
+** NULL if the auxiliary data has been discarded.
** ^After each call to sqlite3_set_auxdata(C,N,P,X) where X is not NULL,
** SQLite will invoke the destructor function X with parameter P exactly
-** once, when the metadata is discarded.
-** SQLite is free to discard the metadata at any time, including:
+** once, when the auxiliary data is discarded.
+** SQLite is free to discard the auxiliary data at any time, including:
** - ^(when the corresponding function parameter changes)^, or
**
** - ^(when [sqlite3_reset()] or [sqlite3_finalize()] is called for the
** SQL statement)^, or
**
** - ^(when sqlite3_set_auxdata() is invoked again on the same
** parameter)^, or
-**
-** - ^(during the original sqlite3_set_auxdata() call when a memory
-** allocation error occurs.)^
+** - ^(during the original sqlite3_set_auxdata() call when a memory
+** allocation error occurs.)^
+**
+** - ^(during the original sqlite3_set_auxdata() call if the function
+** is evaluated during query planning instead of during query execution,
+** as sometimes happens with [SQLITE_ENABLE_STAT4].)^
**
-** Note the last bullet in particular. The destructor X in
+** Note the last two bullets in particular. The destructor X in
** sqlite3_set_auxdata(C,N,P,X) might be called immediately, before the
** sqlite3_set_auxdata() interface even returns. Hence sqlite3_set_auxdata()
** should be called near the end of the function implementation and the
** function implementation should not make any use of P after
-** sqlite3_set_auxdata() has been called.
-**
-** ^(In practice, metadata is preserved between function calls for
+** sqlite3_set_auxdata() has been called. Furthermore, a call to
+** sqlite3_get_auxdata() that occurs immediately after a corresponding call
+** to sqlite3_set_auxdata() might still return NULL if an out-of-memory
+** condition occurred during the sqlite3_set_auxdata() call or if the
+** function is being evaluated during query planning rather than during
+** query execution.
+**
+** ^(In practice, auxiliary data is preserved between function calls for
** function parameters that are compile-time constants, including literal
** values and [parameters] and expressions composed from the same.)^
**
@@ -5488,10 +6238,68 @@ SQLITE_API sqlite3 *sqlite3_context_db_handle(sqlite3_context*);
**
** These routines must be called from the same thread in which
** the SQL function is running.
+**
+** See also: [sqlite3_get_clientdata()] and [sqlite3_set_clientdata()].
*/
SQLITE_API void *sqlite3_get_auxdata(sqlite3_context*, int N);
SQLITE_API void sqlite3_set_auxdata(sqlite3_context*, int N, void*, void (*)(void*));
+/*
+** CAPI3REF: Database Connection Client Data
+** METHOD: sqlite3
+**
+** These functions are used to associate one or more named pointers
+** with a [database connection].
+** A call to sqlite3_set_clientdata(D,N,P,X) causes the pointer P
+** to be attached to [database connection] D using name N. Subsequent
+** calls to sqlite3_get_clientdata(D,N) will return a copy of pointer P
+** or a NULL pointer if there were no prior calls to
+** sqlite3_set_clientdata() with the same values of D and N.
+** Names are compared using strcmp() and are thus case sensitive.
+** sqlite3_set_clientdata() returns SQLITE_OK (0) on success and
+** SQLITE_NOMEM on allocation failure.
+**
+** If P and X are both non-NULL, then the destructor X is invoked with
+** argument P on the first of the following occurrences:
+**
+** - An out-of-memory error occurs during the call to
+** sqlite3_set_clientdata() which attempts to register pointer P.
+**
+** - A subsequent call to sqlite3_set_clientdata(D,N,P,X) is made
+** with the same D and N parameters.
+**
+** - The database connection closes. SQLite does not make any guarantees
+** about the order in which destructors are called, only that all
+** destructors will be called exactly once at some point during the
+** database connection closing process.
+**
+**
+** SQLite does not do anything with client data other than invoke
+** destructors on the client data at the appropriate time. The intended
+** use for client data is to provide a mechanism for wrapper libraries
+** to store additional information about an SQLite database connection.
+**
+** There is no limit (other than available memory) on the number of different
+** client data pointers (with different names) that can be attached to a
+** single database connection. However, the implementation is optimized
+** for the case of having only one or two different client data names.
+** Applications and wrapper libraries are discouraged from using more than
+** one client data name each.
+**
+** There is no way to enumerate the client data pointers
+** associated with a database connection. The N parameter can be thought
+** of as a secret key such that only code that knows the secret key is able
+** to access the associated data.
+**
+** Security Warning: These interfaces should not be exposed in scripting
+** languages or in other circumstances where it might be possible for an
+** attacker to invoke them. Any agent that can invoke these interfaces
+** can probably also take control of the process.
+**
+** Database connection client data is only available for SQLite
+** version 3.44.0 ([dateof:3.44.0]) and later.
+**
+** See also: [sqlite3_set_auxdata()] and [sqlite3_get_auxdata()].
+*/
+SQLITE_API void *sqlite3_get_clientdata(sqlite3*,const char*);
+SQLITE_API int sqlite3_set_clientdata(sqlite3*, const char*, void*, void(*)(void*));
/*
** CAPI3REF: Constants Defining Special Destructor Behavior
@@ -5543,8 +6351,9 @@ typedef void (*sqlite3_destructor_type)(void*);
** 2nd parameter of sqlite3_result_error() or sqlite3_result_error16()
** as the text of an error message. ^SQLite interprets the error
** message string from sqlite3_result_error() as UTF-8. ^SQLite
-** interprets the string from sqlite3_result_error16() as UTF-16 in native
-** byte order. ^If the third parameter to sqlite3_result_error()
+** interprets the string from sqlite3_result_error16() as UTF-16 using
+** the same [byte-order determination rules] as [sqlite3_bind_text16()].
+** ^If the third parameter to sqlite3_result_error()
** or sqlite3_result_error16() is negative then SQLite takes as the error
** message all text up through the first zero character.
** ^If the third parameter to sqlite3_result_error() or
@@ -5586,15 +6395,16 @@ typedef void (*sqlite3_destructor_type)(void*);
** of [SQLITE_UTF8], [SQLITE_UTF16], [SQLITE_UTF16BE], or [SQLITE_UTF16LE].
** ^SQLite takes the text result from the application from
** the 2nd parameter of the sqlite3_result_text* interfaces.
-** ^If the 3rd parameter to the sqlite3_result_text* interfaces
-** is negative, then SQLite takes result text from the 2nd parameter
-** through the first zero character.
+** ^If the 3rd parameter to any of the sqlite3_result_text* interfaces
+** other than sqlite3_result_text64() is negative, then SQLite computes
+** the string length itself by searching the 2nd parameter for the first
+** zero character.
** ^If the 3rd parameter to the sqlite3_result_text* interfaces
** is non-negative, then as many bytes (not characters) of the text
** pointed to by the 2nd parameter are taken as the application-defined
** function result. If the 3rd parameter is non-negative, then it
** must be the byte offset into the string where the NUL terminator would
-** appear if the string where NUL terminated. If any NUL characters occur
+** appear if the string were NUL terminated. If any NUL characters occur
** in the string at a byte offset that is less than the value of the 3rd
** parameter, then the resulting string will contain embedded NULs and the
** result of expressions operating on strings with embedded NULs is undefined.
@@ -5612,6 +6422,25 @@ typedef void (*sqlite3_destructor_type)(void*);
** then SQLite makes a copy of the result into space obtained
** from [sqlite3_malloc()] before it returns.
**
+** ^For the sqlite3_result_text16(), sqlite3_result_text16le(), and
+** sqlite3_result_text16be() routines, and for sqlite3_result_text64()
+** when the encoding is not UTF8, if the input UTF16 begins with a
+** byte-order mark (BOM, U+FEFF) then the BOM is removed from the
+** string and the rest of the string is interpreted according to the
+** byte-order specified by the BOM. ^The byte-order specified by
+** the BOM at the beginning of the text overrides the byte-order
+** specified by the interface procedure. ^So, for example, if
+** sqlite3_result_text16le() is invoked with text that begins
+** with bytes 0xfe, 0xff (a big-endian byte-order mark) then the
+** first two bytes of input are skipped and the remaining input
+** is interpreted as UTF16BE text.
+**
+** ^For UTF16 input text to the sqlite3_result_text16(),
+** sqlite3_result_text16be(), sqlite3_result_text16le(), and
+** sqlite3_result_text64() routines, if the text contains invalid
+** UTF16 characters, the invalid characters might be converted
+** into the Unicode replacement character, U+FFFD.
+**
** ^The sqlite3_result_value() interface sets the result of
** the application-defined function to be a copy of the
** [unprotected sqlite3_value] object specified by the 2nd parameter. ^The
@@ -5624,7 +6453,7 @@ typedef void (*sqlite3_destructor_type)(void*);
**
** ^The sqlite3_result_pointer(C,P,T,D) interface sets the result to an
** SQL NULL value, just like [sqlite3_result_null(C)], except that it
-** also associates the host-language pointer P or type T with that
+** also associates the host-language pointer P or type T with that
** NULL value such that the pointer can be retrieved within an
** [application-defined SQL function] using [sqlite3_value_pointer()].
** ^If the D parameter is not NULL, then it is a pointer to a destructor
@@ -5633,7 +6462,7 @@ typedef void (*sqlite3_destructor_type)(void*);
** string and preferably a string literal. The sqlite3_result_pointer()
** routine is part of the [pointer passing interface] added for SQLite 3.20.0.
**
-** If these routines are called from within the different thread
+** If these routines are called from within a different thread
** than the one containing the application-defined function that received
** the [sqlite3_context] pointer, the results are undefined.
*/
@@ -5666,12 +6495,26 @@ SQLITE_API int sqlite3_result_zeroblob64(sqlite3_context*, sqlite3_uint64 n);
** METHOD: sqlite3_context
**
** The sqlite3_result_subtype(C,T) function causes the subtype of
-** the result from the [application-defined SQL function] with
-** [sqlite3_context] C to be the value T. Only the lower 8 bits
+** the result from the [application-defined SQL function] with
+** [sqlite3_context] C to be the value T. Only the lower 8 bits
** of the subtype T are preserved in current versions of SQLite;
** higher order bits are discarded.
** The number of subtype bytes preserved by SQLite might increase
** in future releases of SQLite.
+**
+** Every [application-defined SQL function] that invokes this interface
+** should include the [SQLITE_RESULT_SUBTYPE] property in its
+** text encoding argument when the SQL function is
+** [sqlite3_create_function|registered]. If the [SQLITE_RESULT_SUBTYPE]
+** property is omitted from the function that invokes sqlite3_result_subtype(),
+** then in some cases sqlite3_result_subtype() might fail to set
+** the result subtype.
+**
+** If SQLite is compiled with -DSQLITE_STRICT_SUBTYPE=1, then any
+** SQL function that invokes the sqlite3_result_subtype() interface
+** and that does not have the SQLITE_RESULT_SUBTYPE property will raise
+** an error. Future versions of SQLite might enable -DSQLITE_STRICT_SUBTYPE=1
+** by default.
*/
SQLITE_API void sqlite3_result_subtype(sqlite3_context*,unsigned int);
@@ -5714,7 +6557,7 @@ SQLITE_API void sqlite3_result_subtype(sqlite3_context*,unsigned int);
** deleted. ^When all collating functions having the same name are deleted,
** that collation is no longer usable.
**
-** ^The collating function callback is invoked with a copy of the pArg
+** ^The collating function callback is invoked with a copy of the pArg
** application data pointer and with two strings in the encoding specified
** by the eTextRep argument. The two integer parameters to the collating
** function callback are the length of the two strings, in bytes. The collating
@@ -5745,36 +6588,36 @@ SQLITE_API void sqlite3_result_subtype(sqlite3_context*,unsigned int);
** calls to the collation creation functions or when the
** [database connection] is closed using [sqlite3_close()].
**
-** ^The xDestroy callback is not called if the
+** ^The xDestroy callback is not called if the
** sqlite3_create_collation_v2() function fails. Applications that invoke
-** sqlite3_create_collation_v2() with a non-NULL xDestroy argument should
+** sqlite3_create_collation_v2() with a non-NULL xDestroy argument should
** check the return code and dispose of the application data pointer
** themselves rather than expecting SQLite to deal with it for them.
-** This is different from every other SQLite interface. The inconsistency
-** is unfortunate but cannot be changed without breaking backwards
+** This is different from every other SQLite interface. The inconsistency
+** is unfortunate but cannot be changed without breaking backwards
** compatibility.
**
** See also: [sqlite3_collation_needed()] and [sqlite3_collation_needed16()].
*/
SQLITE_API int sqlite3_create_collation(
- sqlite3*,
- const char *zName,
- int eTextRep,
+ sqlite3*,
+ const char *zName,
+ int eTextRep,
void *pArg,
int(*xCompare)(void*,int,const void*,int,const void*)
);
SQLITE_API int sqlite3_create_collation_v2(
- sqlite3*,
- const char *zName,
- int eTextRep,
+ sqlite3*,
+ const char *zName,
+ int eTextRep,
void *pArg,
int(*xCompare)(void*,int,const void*,int,const void*),
void(*xDestroy)(void*)
);
SQLITE_API int sqlite3_create_collation16(
- sqlite3*,
+ sqlite3*,
const void *zName,
- int eTextRep,
+ int eTextRep,
void *pArg,
int(*xCompare)(void*,int,const void*,int,const void*)
);
@@ -5807,64 +6650,19 @@ SQLITE_API int sqlite3_create_collation16(
** [sqlite3_create_collation_v2()].
*/
SQLITE_API int sqlite3_collation_needed(
- sqlite3*,
- void*,
+ sqlite3*,
+ void*,
void(*)(void*,sqlite3*,int eTextRep,const char*)
);
SQLITE_API int sqlite3_collation_needed16(
- sqlite3*,
+ sqlite3*,
void*,
void(*)(void*,sqlite3*,int eTextRep,const void*)
);
-#ifdef SQLITE_HAS_CODEC
-/*
-** Specify the key for an encrypted database. This routine should be
-** called right after sqlite3_open().
-**
-** The code to implement this API is not available in the public release
-** of SQLite.
-*/
-SQLITE_API int sqlite3_key(
- sqlite3 *db, /* Database to be rekeyed */
- const void *pKey, int nKey /* The key */
-);
-SQLITE_API int sqlite3_key_v2(
- sqlite3 *db, /* Database to be rekeyed */
- const char *zDbName, /* Name of the database */
- const void *pKey, int nKey /* The key */
-);
-
-/*
-** Change the key on an open database. If the current database is not
-** encrypted, this routine will encrypt it. If pNew==0 or nNew==0, the
-** database is decrypted.
-**
-** The code to implement this API is not available in the public release
-** of SQLite.
-*/
-SQLITE_API int sqlite3_rekey(
- sqlite3 *db, /* Database to be rekeyed */
- const void *pKey, int nKey /* The new key */
-);
-SQLITE_API int sqlite3_rekey_v2(
- sqlite3 *db, /* Database to be rekeyed */
- const char *zDbName, /* Name of the database */
- const void *pKey, int nKey /* The new key */
-);
-
-/*
-** Specify the activation key for a SEE database. Unless
-** activated, none of the SEE routines will work.
-*/
-SQLITE_API void sqlite3_activate_see(
- const char *zPassPhrase /* Activation phrase */
-);
-#endif
-
#ifdef SQLITE_ENABLE_CEROD
/*
-** Specify the activation key for a CEROD database. Unless
+** Specify the activation key for a CEROD database. Unless
** activated, none of the CEROD routines will work.
*/
SQLITE_API void sqlite3_activate_cerod(
@@ -5888,6 +6686,13 @@ SQLITE_API void sqlite3_activate_cerod(
** of the default VFS is not implemented correctly, or not implemented at
** all, then the behavior of sqlite3_sleep() may deviate from the description
** in the previous paragraphs.
+**
+** If a negative argument is passed to sqlite3_sleep() the results vary by
+** VFS and operating system. Some systems treat a negative argument as an
+** instruction to sleep forever. Others understand it to mean do not sleep
+** at all. ^In SQLite version 3.42.0 and later, a negative
+** argument passed into sqlite3_sleep() is changed to zero before it is relayed
+** down into the xSleep method of the VFS.
*/
SQLITE_API int sqlite3_sleep(int);
@@ -5920,7 +6725,7 @@ SQLITE_API int sqlite3_sleep(int);
** ^The [temp_store_directory pragma] may modify this variable and cause
** it to point to memory obtained from [sqlite3_malloc]. ^Furthermore,
** the [temp_store_directory pragma] always assumes that any string
-** that this variable points to is held in memory obtained from
+** that this variable points to is held in memory obtained from
** [sqlite3_malloc] and the pragma may attempt to free that memory
** using [sqlite3_free].
** Hence, if this variable is modified directly, either it should be
@@ -5977,7 +6782,7 @@ SQLITE_API SQLITE_EXTERN char *sqlite3_temp_directory;
** ^The [data_store_directory pragma] may modify this variable and cause
** it to point to memory obtained from [sqlite3_malloc]. ^Furthermore,
** the [data_store_directory pragma] always assumes that any string
-** that this variable points to is held in memory obtained from
+** that this variable points to is held in memory obtained from
** [sqlite3_malloc] and the pragma may attempt to free that memory
** using [sqlite3_free].
** Hence, if this variable is modified directly, either it should be
@@ -6058,6 +6863,28 @@ SQLITE_API int sqlite3_get_autocommit(sqlite3*);
*/
SQLITE_API sqlite3 *sqlite3_db_handle(sqlite3_stmt*);
+/*
+** CAPI3REF: Return The Schema Name For A Database Connection
+** METHOD: sqlite3
+**
+** ^The sqlite3_db_name(D,N) interface returns a pointer to the schema name
+** for the N-th database on database connection D, or a NULL pointer if N is
+** out of range. An N value of 0 means the main database file. An N of 1 is
+** the "temp" schema. Larger values of N correspond to various ATTACH-ed
+** databases.
+**
+** Space to hold the string that is returned by sqlite3_db_name() is managed
+** by SQLite itself. The string might be deallocated by any operation that
+** changes the schema, including [ATTACH] or [DETACH] or calls to
+** [sqlite3_serialize()] or [sqlite3_deserialize()], even operations that
+** occur on a different thread. Applications that need to
+** remember the string long-term should make their own copy. Applications that
+** are accessing the same database connection simultaneously on multiple
+** threads should mutex-protect calls to this API and should make their own
+** private copy of the result prior to releasing the mutex.
+*/
+SQLITE_API const char *sqlite3_db_name(sqlite3 *db, int N);
+
/*
** CAPI3REF: Return The Filename For A Database Connection
** METHOD: sqlite3
@@ -6088,7 +6915,7 @@ SQLITE_API sqlite3 *sqlite3_db_handle(sqlite3_stmt*);
** [sqlite3_filename_wal()]
**
*/
-SQLITE_API const char *sqlite3_db_filename(sqlite3 *db, const char *zDbName);
+SQLITE_API sqlite3_filename sqlite3_db_filename(sqlite3 *db, const char *zDbName);
/*
** CAPI3REF: Determine if a database is read-only
@@ -6100,6 +6927,57 @@ SQLITE_API const char *sqlite3_db_filename(sqlite3 *db, const char *zDbName);
*/
SQLITE_API int sqlite3_db_readonly(sqlite3 *db, const char *zDbName);
+/*
+** CAPI3REF: Determine the transaction state of a database
+** METHOD: sqlite3
+**
+** ^The sqlite3_txn_state(D,S) interface returns the current
+** [transaction state] of schema S in database connection D. ^If S is NULL,
+** then the highest transaction state of any schema on database connection D
+** is returned. Transaction states are (in order of lowest to highest):
+**
+** - SQLITE_TXN_NONE
+**
+** - SQLITE_TXN_READ
+**
+** - SQLITE_TXN_WRITE
+**
+** ^If the S argument to sqlite3_txn_state(D,S) is not the name of
+** a valid schema, then -1 is returned.
+*/
+SQLITE_API int sqlite3_txn_state(sqlite3*,const char *zSchema);
+
+/*
+** CAPI3REF: Allowed return values from sqlite3_txn_state()
+** KEYWORDS: {transaction state}
+**
+** These constants define the current transaction state of a database file.
+** ^The [sqlite3_txn_state(D,S)] interface returns one of these
+** constants in order to describe the transaction state of schema S
+** in [database connection] D.
+**
+**
+** [[SQLITE_TXN_NONE]] - SQLITE_TXN_NONE
+** - The SQLITE_TXN_NONE state means that no transaction is currently
+** pending.
+**
+** [[SQLITE_TXN_READ]] - SQLITE_TXN_READ
+** - The SQLITE_TXN_READ state means that the database is currently
+** in a read transaction. Content has been read from the database file
+** but nothing in the database file has changed. The transaction state
+** will be advanced to SQLITE_TXN_WRITE if any changes occur and there are
+** no other conflicting concurrent write transactions. The transaction
+** state will revert to SQLITE_TXN_NONE following a [ROLLBACK] or
+** [COMMIT].
+**
+** [[SQLITE_TXN_WRITE]] - SQLITE_TXN_WRITE
+** - The SQLITE_TXN_WRITE state means that the database is currently
+** in a write transaction. Content has been written to the database file
+** but has not yet committed. The transaction state will change to
+** SQLITE_TXN_NONE at the next [ROLLBACK] or [COMMIT].
+*/
+#define SQLITE_TXN_NONE 0
+#define SQLITE_TXN_READ 1
+#define SQLITE_TXN_WRITE 2
+
/*
** CAPI3REF: Find the next prepared statement
** METHOD: sqlite3
@@ -6166,6 +7044,72 @@ SQLITE_API sqlite3_stmt *sqlite3_next_stmt(sqlite3 *pDb, sqlite3_stmt *pStmt);
SQLITE_API void *sqlite3_commit_hook(sqlite3*, int(*)(void*), void*);
SQLITE_API void *sqlite3_rollback_hook(sqlite3*, void(*)(void *), void*);
+/*
+** CAPI3REF: Autovacuum Compaction Amount Callback
+** METHOD: sqlite3
+**
+** ^The sqlite3_autovacuum_pages(D,C,P,X) interface registers a callback
+** function C that is invoked prior to each autovacuum of the database
+** file. ^The callback is passed a copy of the generic data pointer (P),
+** the schema-name of the attached database that is being autovacuumed,
+** the size of the database file in pages, the number of free pages,
+** and the number of bytes per page, respectively. The callback should
+** return the number of free pages that should be removed by the
+** autovacuum. ^If the callback returns zero, then no autovacuum happens.
+** ^If the value returned is greater than or equal to the number of
+** free pages, then a complete autovacuum happens.
+**
+** ^If there are multiple ATTACH-ed database files that are being
+** modified as part of a transaction commit, then the autovacuum pages
+** callback is invoked separately for each file.
+**
+** The callback is not reentrant. The callback function should
+** not attempt to invoke any other SQLite interface. If it does, bad
+** things may happen, including segmentation faults and corrupt database
+** files. The callback function should be a simple function that
+** does some arithmetic on its input parameters and returns a result.
+**
+** ^The X parameter to sqlite3_autovacuum_pages(D,C,P,X) is an optional
+** destructor for the P parameter. ^If X is not NULL, then X(P) is
+** invoked whenever the database connection closes or when the callback
+** is overwritten by another invocation of sqlite3_autovacuum_pages().
+**
+** ^There is only one autovacuum pages callback per database connection.
+** ^Each call to the sqlite3_autovacuum_pages() interface overrides all
+** previous invocations for that database connection. ^If the callback
+** argument (C) to sqlite3_autovacuum_pages(D,C,P,X) is a NULL pointer,
+** then the autovacuum steps callback is canceled. The return value
+** from sqlite3_autovacuum_pages() is normally SQLITE_OK, but might
+** be some other error code if something goes wrong. The current
+** implementation will only return SQLITE_OK or SQLITE_MISUSE, but other
+** return codes might be added in future releases.
+**
+** If no autovacuum pages callback is specified (the usual case) or
+** a NULL pointer is provided for the callback,
+** then the default behavior is to vacuum all free pages. So, in other
+** words, the default behavior is the same as if the callback function
+** were something like this:
+**
+**
+** unsigned int demonstration_autovac_pages_callback(
+** void *pClientData,
+** const char *zSchema,
+** unsigned int nDbPage,
+** unsigned int nFreePage,
+** unsigned int nBytePerPage
+** ){
+** return nFreePage;
+** }
+**
+*/
+SQLITE_API int sqlite3_autovacuum_pages(
+ sqlite3 *db,
+ unsigned int(*)(void*,const char*,unsigned int,unsigned int,unsigned int),
+ void*,
+ void(*)(void*)
+);
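As a variation on the demonstration callback above, a callback can implement a partial-vacuum policy. This hedged sketch (the function name and the 5% figure are arbitrary choices for illustration) frees only the free pages in excess of 5% of the file size, leaving some slack so near-term inserts need not regrow the file:

```c
#include <assert.h>

/* Hypothetical autovacuum-pages callback with the signature shown
** above.  Keeps up to 5% of the database pages on the freelist and
** asks SQLite to vacuum only the excess. */
static unsigned int keep_five_percent_slack(
  void *pClientData,           /* copy of the P argument (unused here) */
  const char *zSchema,         /* schema being autovacuumed (unused) */
  unsigned int nDbPage,        /* size of the database file in pages */
  unsigned int nFreePage,      /* number of free pages */
  unsigned int nBytePerPage    /* bytes per page (unused) */
){
  unsigned int nSlack = nDbPage/20;   /* retain up to 5% as free pages */
  (void)pClientData; (void)zSchema; (void)nBytePerPage;
  return nFreePage>nSlack ? nFreePage-nSlack : 0;
}
```

Per the documentation above, returning 0 suppresses the autovacuum entirely and returning nFreePage or more vacuums everything; this policy sits between the two.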
+
+
/*
** CAPI3REF: Data Change Notification Callbacks
** METHOD: sqlite3
@@ -6179,6 +7123,8 @@ SQLITE_API void *sqlite3_rollback_hook(sqlite3*, void(*)(void *), void*);
**
** ^The second argument is a pointer to the function to invoke when a
** row is updated, inserted or deleted in a rowid table.
+** ^The update hook is disabled by invoking sqlite3_update_hook()
+** with a NULL pointer as the second parameter.
** ^The first argument to the callback is a copy of the third argument
** to sqlite3_update_hook().
** ^The second callback argument is one of [SQLITE_INSERT], [SQLITE_DELETE],
@@ -6190,7 +7136,7 @@ SQLITE_API void *sqlite3_rollback_hook(sqlite3*, void(*)(void *), void*);
** ^In the case of an update, this is the [rowid] after the update takes place.
**
** ^(The update hook is not invoked when internal system tables are
-** modified (i.e. sqlite_sequence).)^
+** modified (i.e. sqlite_sequence).)^
** ^The update hook is not invoked when [WITHOUT ROWID] tables are modified.
**
** ^In the current implementation, the update hook
@@ -6200,6 +7146,12 @@ SQLITE_API void *sqlite3_rollback_hook(sqlite3*, void(*)(void *), void*);
** The exceptions defined in this paragraph might change in a future
** release of SQLite.
**
+** Whether the update hook is invoked before or after the
+** corresponding change is currently unspecified and may differ
+** depending on the type of change. Do not rely on the order of the
+** hook call with regards to the final result of the operation which
+** triggers the hook.
+**
** The update hook implementation must not do anything that will modify
** the database connection that invoked the update hook. Any actions
** to modify the database connection must be deferred until after the
@@ -6216,7 +7168,7 @@ SQLITE_API void *sqlite3_rollback_hook(sqlite3*, void(*)(void *), void*);
** and [sqlite3_preupdate_hook()] interfaces.
*/
SQLITE_API void *sqlite3_update_hook(
- sqlite3*,
+ sqlite3*,
void(*)(void *,int ,char const *,char const *,sqlite3_int64),
void*
);
@@ -6229,8 +7181,13 @@ SQLITE_API void *sqlite3_update_hook(
** to the same database. Sharing is enabled if the argument is true
** and disabled if the argument is false.)^
**
+** This interface is omitted if SQLite is compiled with
+** [-DSQLITE_OMIT_SHARED_CACHE]. The [-DSQLITE_OMIT_SHARED_CACHE]
+** compile-time option is recommended because the
+** [use of shared cache mode is discouraged].
+**
** ^Cache sharing is enabled and disabled for an entire process.
-** This is a change as of SQLite [version 3.5.0] ([dateof:3.5.0]).
+** This is a change as of SQLite [version 3.5.0] ([dateof:3.5.0]).
** In prior versions of SQLite,
** sharing was enabled or disabled for each thread separately.
**
@@ -6251,8 +7208,8 @@ SQLITE_API void *sqlite3_update_hook(
** with the [SQLITE_OPEN_SHAREDCACHE] flag.
**
** Note: This method is disabled on MacOS X 10.7 and iOS version 5.0
-** and will always return SQLITE_MISUSE. On those systems,
-** shared cache mode should be enabled per-database connection via
+** and will always return SQLITE_MISUSE. On those systems,
+** shared cache mode should be enabled per-database connection via
** [sqlite3_open_v2()] with [SQLITE_OPEN_SHAREDCACHE].
**
** This interface is threadsafe on processors where writing a
@@ -6296,7 +7253,7 @@ SQLITE_API int sqlite3_db_release_memory(sqlite3*);
** CAPI3REF: Impose A Limit On Heap Size
**
** These interfaces impose limits on the amount of heap memory that will be
-** by all database connections within a single process.
+** used by all database connections within a single process.
**
** ^The sqlite3_soft_heap_limit64() interface sets and/or queries the
** soft limit on the amount of heap memory that may be allocated by SQLite.
@@ -6305,7 +7262,7 @@ SQLITE_API int sqlite3_db_release_memory(sqlite3*);
** as heap memory usages approaches the limit.
** ^The soft heap limit is "soft" because even though SQLite strives to stay
** below the limit, it will exceed the limit rather than generate
-** an [SQLITE_NOMEM] error. In other words, the soft heap limit
+** an [SQLITE_NOMEM] error. In other words, the soft heap limit
** is advisory only.
**
** ^The sqlite3_hard_heap_limit64(N) interface sets a hard upper bound of
@@ -6327,7 +7284,7 @@ SQLITE_API int sqlite3_db_release_memory(sqlite3*);
** ^The soft heap limit may not be greater than the hard heap limit.
** ^If the hard heap limit is enabled and if sqlite3_soft_heap_limit(N)
** is invoked with a value of N that is greater than the hard heap limit,
-** the the soft heap limit is set to the value of the hard heap limit.
+** the soft heap limit is set to the value of the hard heap limit.
** ^The soft heap limit is automatically enabled whenever the hard heap
** limit is enabled. ^When sqlite3_hard_heap_limit64(N) is invoked and
** the soft heap limit is outside the range of 1..N, then the soft heap
@@ -6354,7 +7311,7 @@ SQLITE_API int sqlite3_db_release_memory(sqlite3*);
** )^
**
** The circumstances under which SQLite will enforce the heap limits may
-** changes in future releases of SQLite.
+** change in future releases of SQLite.
*/
SQLITE_API sqlite3_int64 sqlite3_soft_heap_limit64(sqlite3_int64 N);
SQLITE_API sqlite3_int64 sqlite3_hard_heap_limit64(sqlite3_int64 N);
@@ -6421,7 +7378,7 @@ SQLITE_API SQLITE_DEPRECATED void sqlite3_soft_heap_limit(int N);
**
** ^If the specified table is actually a view, an [error code] is returned.
**
-** ^If the specified column is "rowid", "oid" or "_rowid_" and the table
+** ^If the specified column is "rowid", "oid" or "_rowid_" and the table
** is not a [WITHOUT ROWID] table and an
** [INTEGER PRIMARY KEY] column has been explicitly declared, then the output
** parameters are set for the explicitly declared column. ^(If there is no
@@ -6469,8 +7426,8 @@ SQLITE_API int sqlite3_table_column_metadata(
** ^The entry point is zProc.
** ^(zProc may be 0, in which case SQLite will try to come up with an
** entry point name on its own. It first tries "sqlite3_extension_init".
-** If that does not work, it constructs a name "sqlite3_X_init" where the
-** X is consists of the lower-case equivalent of all ASCII alphabetic
+** If that does not work, it constructs a name "sqlite3_X_init" where
+** X consists of the lower-case equivalent of all ASCII alphabetic
** characters in the filename from the last "/" to the first following
** "." and omitting any initial "lib".)^
** ^The sqlite3_load_extension() interface returns
@@ -6487,7 +7444,7 @@ SQLITE_API int sqlite3_table_column_metadata(
** prior to calling this API,
** otherwise an error will be returned.
**
-** Security warning: It is recommended that the
+** Security warning: It is recommended that the
** [SQLITE_DBCONFIG_ENABLE_LOAD_EXTENSION] method be used to enable only this
** interface. The use of the [sqlite3_enable_load_extension()] interface
** should be avoided. This will keep the SQL function [load_extension()]
@@ -6541,7 +7498,7 @@ SQLITE_API int sqlite3_enable_load_extension(sqlite3 *db, int onoff);
** ^(Even though the function prototype shows that xEntryPoint() takes
** no arguments and returns void, SQLite invokes xEntryPoint() with three
** arguments and expects an integer result as if the signature of the
-** entry point where as follows:
+** entry point were as follows:
**
**
** int xEntryPoint(
@@ -6574,7 +7531,7 @@ SQLITE_API int sqlite3_auto_extension(void(*xEntryPoint)(void));
** ^The [sqlite3_cancel_auto_extension(X)] interface unregisters the
** initialization routine X that was registered using a prior call to
** [sqlite3_auto_extension(X)]. ^The [sqlite3_cancel_auto_extension(X)]
-** routine returns 1 if initialization routine X was successfully
+** routine returns 1 if initialization routine X was successfully
** unregistered and it returns 0 if X was not on the list of initialization
** routines.
*/
@@ -6588,15 +7545,6 @@ SQLITE_API int sqlite3_cancel_auto_extension(void(*xEntryPoint)(void));
*/
SQLITE_API void sqlite3_reset_auto_extension(void);
-/*
-** The interface to the virtual-table mechanism is currently considered
-** to be experimental. The interface might change in incompatible ways.
-** If this is a problem for you, do not use the interface at this time.
-**
-** When the virtual-table mechanism stabilizes, we will declare the
-** interface fixed, support it indefinitely, and remove this comment.
-*/
-
/*
** Structures used by the virtual table interface
*/
@@ -6609,8 +7557,8 @@ typedef struct sqlite3_module sqlite3_module;
** CAPI3REF: Virtual Table Object
** KEYWORDS: sqlite3_module {virtual table module}
**
-** This structure, sometimes called a "virtual table module",
-** defines the implementation of a [virtual table].
+** This structure, sometimes called a "virtual table module",
+** defines the implementation of a [virtual table].
** This structure consists mostly of methods for the module.
**
** ^A virtual table module is created by filling in a persistent
@@ -6649,7 +7597,7 @@ struct sqlite3_module {
void (**pxFunc)(sqlite3_context*,int,sqlite3_value**),
void **ppArg);
int (*xRename)(sqlite3_vtab *pVtab, const char *zNew);
- /* The methods above are in version 1 of the sqlite_module object. Those
+ /* The methods above are in version 1 of the sqlite_module object. Those
** below are for version 2 and greater. */
int (*xSavepoint)(sqlite3_vtab *pVTab, int);
int (*xRelease)(sqlite3_vtab *pVTab, int);
@@ -6657,6 +7605,10 @@ struct sqlite3_module {
/* The methods above are in versions 1 and 2 of the sqlite_module object.
** Those below are for version 3 and greater. */
int (*xShadowName)(const char*);
+ /* The methods above are in versions 1 through 3 of the sqlite_module object.
+ ** Those below are for version 4 and greater. */
+ int (*xIntegrity)(sqlite3_vtab *pVTab, const char *zSchema,
+ const char *zTabName, int mFlags, char **pzErr);
};
/*
@@ -6699,7 +7651,7 @@ struct sqlite3_module {
** required by SQLite. If the table has at least 64 columns and any column
** to the right of the first 63 is required, then bit 63 of colUsed is also
** set. In other words, column iCol may be required if the expression
-** (colUsed & ((sqlite3_uint64)1 << (iCol>=63 ? 63 : iCol))) evaluates to
+** (colUsed & ((sqlite3_uint64)1 << (iCol>=63 ? 63 : iCol))) evaluates to
** non-zero.
**
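The colUsed expression quoted above can be wrapped as a standalone predicate. A minimal sketch, using uint64_t in place of sqlite3_uint64 and a hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

/* True if column iCol may be required by the statement, per the
** colUsed bitmask rule: columns 63 and beyond all share bit 63. */
static int column_may_be_required(uint64_t colUsed, int iCol){
  return (colUsed & ((uint64_t)1 << (iCol>=63 ? 63 : iCol))) != 0;
}
```

Note that because columns past the 63rd share one bit, a set bit 63 means "possibly required" for every such column, never "definitely required".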
** The [xBestIndex] method must fill aConstraintUsage[] with information
@@ -6710,15 +7662,15 @@ struct sqlite3_module {
** virtual table and might not be checked again by the byte code.)^ ^(The
** aConstraintUsage[].omit flag is an optimization hint. When the omit flag
** is left in its default setting of false, the constraint will always be
-** checked separately in byte code. If the omit flag is change to true, then
+** checked separately in byte code. If the omit flag is changed to true, then
** the constraint may or may not be checked in byte code. In other words,
** when the omit flag is true there is no guarantee that the constraint will
** not be checked again using byte code.)^
**
-** ^The idxNum and idxPtr values are recorded and passed into the
+** ^The idxNum and idxStr values are recorded and passed into the
** [xFilter] method.
-** ^[sqlite3_free()] is used to free idxPtr if and only if
-** needToFreeIdxPtr is true.
+** ^[sqlite3_free()] is used to free idxStr if and only if
+** needToFreeIdxStr is true.
**
** ^The orderByConsumed means that output from [xFilter]/[xNext] will occur in
** the correct order to satisfy the ORDER BY clause so that no separate
@@ -6726,17 +7678,19 @@ struct sqlite3_module {
**
** ^The estimatedCost value is an estimate of the cost of a particular
** strategy. A cost of N indicates that the cost of the strategy is similar
-** to a linear scan of an SQLite table with N rows. A cost of log(N)
+** to a linear scan of an SQLite table with N rows. A cost of log(N)
** indicates that the expense of the operation is similar to that of a
** binary search on a unique indexed field of an SQLite table with N rows.
**
** ^The estimatedRows value is an estimate of the number of rows that
** will be returned by the strategy.
**
-** The xBestIndex method may optionally populate the idxFlags field with a
-** mask of SQLITE_INDEX_SCAN_* flags. Currently there is only one such flag -
-** SQLITE_INDEX_SCAN_UNIQUE. If the xBestIndex method sets this flag, SQLite
-** assumes that the strategy may visit at most one row.
+** The xBestIndex method may optionally populate the idxFlags field with a
+** mask of SQLITE_INDEX_SCAN_* flags. One such flag is
+** [SQLITE_INDEX_SCAN_HEX], which if set causes the [EXPLAIN QUERY PLAN]
+** output to show the idxNum as hex instead of as decimal. Another flag is
+** SQLITE_INDEX_SCAN_UNIQUE, which if set indicates that the query plan will
+** return at most one row.
**
** Additionally, if xBestIndex sets the SQLITE_INDEX_SCAN_UNIQUE flag, then
** SQLite also assumes that if a call to the xUpdate() method is made as
@@ -6749,14 +7703,14 @@ struct sqlite3_module {
** the xUpdate method are automatically rolled back by SQLite.
**
** IMPORTANT: The estimatedRows field was added to the sqlite3_index_info
-** structure for SQLite [version 3.8.2] ([dateof:3.8.2]).
+** structure for SQLite [version 3.8.2] ([dateof:3.8.2]).
** If a virtual table extension is
-** used with an SQLite version earlier than 3.8.2, the results of attempting
-** to read or write the estimatedRows field are undefined (but are likely
+** used with an SQLite version earlier than 3.8.2, the results of attempting
+** to read or write the estimatedRows field are undefined (but are likely
** to include crashing the application). The estimatedRows field should
** therefore only be used if [sqlite3_libversion_number()] returns a
** value greater than or equal to 3008002. Similarly, the idxFlags field
-** was added for [version 3.9.0] ([dateof:3.9.0]).
+** was added for [version 3.9.0] ([dateof:3.9.0]).
** It may therefore only be used if
** sqlite3_libversion_number() returns a value greater than or equal to
** 3009000.
@@ -6796,35 +7750,69 @@ struct sqlite3_index_info {
/*
** CAPI3REF: Virtual Table Scan Flags
**
-** Virtual table implementations are allowed to set the
+** Virtual table implementations are allowed to set the
** [sqlite3_index_info].idxFlags field to some combination of
** these bits.
*/
-#define SQLITE_INDEX_SCAN_UNIQUE 1 /* Scan visits at most 1 row */
+#define SQLITE_INDEX_SCAN_UNIQUE 0x00000001 /* Scan visits at most 1 row */
+#define SQLITE_INDEX_SCAN_HEX 0x00000002 /* Display idxNum as hex */
+ /* in EXPLAIN QUERY PLAN */
/*
** CAPI3REF: Virtual Table Constraint Operator Codes
**
** These macros define the allowed values for the
** [sqlite3_index_info].aConstraint[].op field. Each value represents
-** an operator that is part of a constraint term in the wHERE clause of
+** an operator that is part of a constraint term in the WHERE clause of
** a query that uses a [virtual table].
-*/
-#define SQLITE_INDEX_CONSTRAINT_EQ 2
-#define SQLITE_INDEX_CONSTRAINT_GT 4
-#define SQLITE_INDEX_CONSTRAINT_LE 8
-#define SQLITE_INDEX_CONSTRAINT_LT 16
-#define SQLITE_INDEX_CONSTRAINT_GE 32
-#define SQLITE_INDEX_CONSTRAINT_MATCH 64
-#define SQLITE_INDEX_CONSTRAINT_LIKE 65
-#define SQLITE_INDEX_CONSTRAINT_GLOB 66
-#define SQLITE_INDEX_CONSTRAINT_REGEXP 67
-#define SQLITE_INDEX_CONSTRAINT_NE 68
-#define SQLITE_INDEX_CONSTRAINT_ISNOT 69
-#define SQLITE_INDEX_CONSTRAINT_ISNOTNULL 70
-#define SQLITE_INDEX_CONSTRAINT_ISNULL 71
-#define SQLITE_INDEX_CONSTRAINT_IS 72
-#define SQLITE_INDEX_CONSTRAINT_FUNCTION 150
+**
+** ^The left-hand operand of the operator is given by the corresponding
+** aConstraint[].iColumn field. ^An iColumn of -1 indicates the left-hand
+** operand is the rowid.
+** The SQLITE_INDEX_CONSTRAINT_LIMIT and SQLITE_INDEX_CONSTRAINT_OFFSET
+** operators have no left-hand operand, and so for those operators the
+** corresponding aConstraint[].iColumn is meaningless and should not be
+** used.
+**
+** All operator values from SQLITE_INDEX_CONSTRAINT_FUNCTION through
+** value 255 are reserved to represent functions that are overloaded
+** by the [xFindFunction|xFindFunction method] of the virtual table
+** implementation.
+**
+** The right-hand operands for each constraint might be accessible using
+** the [sqlite3_vtab_rhs_value()] interface. Usually the right-hand
+** operand is only available if it appears as a single constant literal
+** in the input SQL. If the right-hand operand is another column or an
+** expression (even a constant expression) or a parameter, then the
+** sqlite3_vtab_rhs_value() probably will not be able to extract it.
+** ^The SQLITE_INDEX_CONSTRAINT_ISNULL and
+** SQLITE_INDEX_CONSTRAINT_ISNOTNULL operators have no right-hand operand
+** and hence calls to sqlite3_vtab_rhs_value() for those operators will
+** always return SQLITE_NOTFOUND.
+**
+** The collating sequence to be used for comparison can be found using
+** the [sqlite3_vtab_collation()] interface. For most real-world virtual
+** tables, the collating sequence of constraints does not matter (for example
+** because the constraints are numeric) and so the sqlite3_vtab_collation()
+** interface is not commonly needed.
+*/
+#define SQLITE_INDEX_CONSTRAINT_EQ 2
+#define SQLITE_INDEX_CONSTRAINT_GT 4
+#define SQLITE_INDEX_CONSTRAINT_LE 8
+#define SQLITE_INDEX_CONSTRAINT_LT 16
+#define SQLITE_INDEX_CONSTRAINT_GE 32
+#define SQLITE_INDEX_CONSTRAINT_MATCH 64
+#define SQLITE_INDEX_CONSTRAINT_LIKE 65
+#define SQLITE_INDEX_CONSTRAINT_GLOB 66
+#define SQLITE_INDEX_CONSTRAINT_REGEXP 67
+#define SQLITE_INDEX_CONSTRAINT_NE 68
+#define SQLITE_INDEX_CONSTRAINT_ISNOT 69
+#define SQLITE_INDEX_CONSTRAINT_ISNOTNULL 70
+#define SQLITE_INDEX_CONSTRAINT_ISNULL 71
+#define SQLITE_INDEX_CONSTRAINT_IS 72
+#define SQLITE_INDEX_CONSTRAINT_LIMIT 73
+#define SQLITE_INDEX_CONSTRAINT_OFFSET 74
+#define SQLITE_INDEX_CONSTRAINT_FUNCTION 150
/*
** CAPI3REF: Register A Virtual Table Implementation
@@ -6836,12 +7824,12 @@ struct sqlite3_index_info {
** preexisting [virtual table] for the module.
**
** ^The module name is registered on the [database connection] specified
-** by the first parameter. ^The name of the module is given by the
+** by the first parameter. ^The name of the module is given by the
** second parameter. ^The third parameter is a pointer to
** the implementation of the [virtual table module]. ^The fourth
** parameter is an arbitrary client data pointer that is passed through
** into the [xCreate] and [xConnect] methods of the virtual table module
-** when a new virtual table is be being created or reinitialized.
+** when a new virtual table is being created or reinitialized.
**
** ^The sqlite3_create_module_v2() interface has a fifth parameter which
** is a pointer to a destructor for the pClientData. ^SQLite will
@@ -6853,7 +7841,7 @@ struct sqlite3_index_info {
** destructor.
**
** ^If the third parameter (the pointer to the sqlite3_module object) is
-** NULL then no new module is create and any existing modules with the
+** NULL then no new module is created and any existing modules with the
** same name are dropped.
**
** See also: [sqlite3_drop_modules()]
@@ -6951,7 +7939,7 @@ SQLITE_API int sqlite3_declare_vtab(sqlite3*, const char *zSQL);
** METHOD: sqlite3
**
** ^(Virtual tables can provide alternative implementations of functions
-** using the [xFindFunction] method of the [virtual table module].
+** using the [xFindFunction] method of the [virtual table module].
** But global versions of those functions
** must exist in order to be overloaded.)^
**
@@ -6965,16 +7953,6 @@ SQLITE_API int sqlite3_declare_vtab(sqlite3*, const char *zSQL);
*/
SQLITE_API int sqlite3_overload_function(sqlite3*, const char *zFuncName, int nArg);
-/*
-** The interface to the virtual-table mechanism defined above (back up
-** to a comment remarkably similar to this one) is currently considered
-** to be experimental. The interface might change in incompatible ways.
-** If this is a problem for you, do not use the interface at this time.
-**
-** When the virtual-table mechanism stabilizes, we will declare the
-** interface fixed, support it indefinitely, and remove this comment.
-*/
-
/*
** CAPI3REF: A Handle To An Open BLOB
** KEYWORDS: {BLOB handle} {BLOB handles}
@@ -7002,7 +7980,7 @@ typedef struct sqlite3_blob sqlite3_blob;
** SELECT zColumn FROM zDb.zTable WHERE [rowid] = iRow;
** )^
**
-** ^(Parameter zDb is not the filename that contains the database, but
+** ^(Parameter zDb is not the filename that contains the database, but
** rather the symbolic name of the database. For attached databases, this is
** the name that appears after the AS keyword in the [ATTACH] statement.
** For the main database file, the database name is "main". For TEMP
@@ -7015,28 +7993,28 @@ typedef struct sqlite3_blob sqlite3_blob;
** ^(On success, [SQLITE_OK] is returned and the new [BLOB handle] is stored
** in *ppBlob. Otherwise an [error code] is returned and, unless the error
** code is SQLITE_MISUSE, *ppBlob is set to NULL.)^ ^This means that, provided
-** the API is not misused, it is always safe to call [sqlite3_blob_close()]
-** on *ppBlob after this function it returns.
+** the API is not misused, it is always safe to call [sqlite3_blob_close()]
+** on *ppBlob after this function returns.
**
** This function fails with SQLITE_ERROR if any of the following are true:
**
-** - ^(Database zDb does not exist)^,
-** - ^(Table zTable does not exist within database zDb)^,
-** - ^(Table zTable is a WITHOUT ROWID table)^,
+** - ^(Database zDb does not exist)^,
+** - ^(Table zTable does not exist within database zDb)^,
+** - ^(Table zTable is a WITHOUT ROWID table)^,
** - ^(Column zColumn does not exist)^,
** - ^(Row iRow is not present in the table)^,
** - ^(The specified column of row iRow contains a value that is not
**   a TEXT or BLOB value)^,
-** - ^(Column zColumn is part of an index, PRIMARY KEY or UNIQUE
+** - ^(Column zColumn is part of an index, PRIMARY KEY or UNIQUE
**   constraint and the blob is being opened for read/write access)^,
-** - ^([foreign key constraints | Foreign key constraints] are enabled,
+** - ^([foreign key constraints | Foreign key constraints] are enabled,
**   column zColumn is part of a [child key] definition and the blob is
**   being opened for read/write access)^.
**
-** ^Unless it returns SQLITE_MISUSE, this function sets the
-** [database connection] error code and message accessible via
-** [sqlite3_errcode()] and [sqlite3_errmsg()] and related functions.
+** ^Unless it returns SQLITE_MISUSE, this function sets the
+** [database connection] error code and message accessible via
+** [sqlite3_errcode()] and [sqlite3_errmsg()] and related functions.
**
** A BLOB referenced by sqlite3_blob_open() may be read using the
** [sqlite3_blob_read()] interface and modified by using
@@ -7062,7 +8040,7 @@ typedef struct sqlite3_blob sqlite3_blob;
** blob.
**
** ^The [sqlite3_bind_zeroblob()] and [sqlite3_result_zeroblob()] interfaces
-** and the built-in [zeroblob] SQL function may be used to create a
+** and the built-in [zeroblob] SQL function may be used to create a
** zero-filled blob to read or write using the incremental-blob interface.
**
** To avoid a resource leak, every open [BLOB handle] should eventually
@@ -7112,7 +8090,7 @@ SQLITE_API int sqlite3_blob_reopen(sqlite3_blob *, sqlite3_int64);
** DESTRUCTOR: sqlite3_blob
**
** ^This function closes an open [BLOB handle]. ^(The BLOB handle is closed
-** unconditionally. Even if this routine returns an error code, the
+** unconditionally. Even if this routine returns an error code, the
** handle is still closed.)^
**
** ^If the blob handle being closed was opened for read-write access, and if
@@ -7122,10 +8100,10 @@ SQLITE_API int sqlite3_blob_reopen(sqlite3_blob *, sqlite3_int64);
** code is returned and the transaction rolled back.
**
** Calling this function with an argument that is not a NULL pointer or an
-** open blob handle results in undefined behaviour. ^Calling this routine
-** with a null pointer (such as would be returned by a failed call to
+** open blob handle results in undefined behavior. ^Calling this routine
+** with a null pointer (such as would be returned by a failed call to
** [sqlite3_blob_open()]) is a harmless no-op. ^Otherwise, if this function
-** is passed a valid open blob handle, the values returned by the
+** is passed a valid open blob handle, the values returned by the
** sqlite3_errcode() and sqlite3_errmsg() functions are set before returning.
*/
SQLITE_API int sqlite3_blob_close(sqlite3_blob *);
@@ -7134,9 +8112,9 @@ SQLITE_API int sqlite3_blob_close(sqlite3_blob *);
** CAPI3REF: Return The Size Of An Open BLOB
** METHOD: sqlite3_blob
**
-** ^Returns the size in bytes of the BLOB accessible via the
+** ^Returns the size in bytes of the BLOB accessible via the
** successfully opened [BLOB handle] in its only argument. ^The
-** incremental blob I/O routines can only read or overwriting existing
+** incremental blob I/O routines can only read or overwrite existing
** blob content; they cannot change the size of a blob.
**
** This routine only works on a [BLOB handle] which has been created
@@ -7185,9 +8163,9 @@ SQLITE_API int sqlite3_blob_read(sqlite3_blob *, void *Z, int N, int iOffset);
**
** ^(On success, sqlite3_blob_write() returns SQLITE_OK.
** Otherwise, an [error code] or an [extended error code] is returned.)^
-** ^Unless SQLITE_MISUSE is returned, this function sets the
-** [database connection] error code and message accessible via
-** [sqlite3_errcode()] and [sqlite3_errmsg()] and related functions.
+** ^Unless SQLITE_MISUSE is returned, this function sets the
+** [database connection] error code and message accessible via
+** [sqlite3_errcode()] and [sqlite3_errmsg()] and related functions.
**
** ^If the [BLOB handle] passed as the first argument was not opened for
** writing (the flags parameter to [sqlite3_blob_open()] was zero),
@@ -7196,9 +8174,9 @@ SQLITE_API int sqlite3_blob_read(sqlite3_blob *, void *Z, int N, int iOffset);
** This function may only modify the contents of the BLOB; it is
** not possible to increase the size of a BLOB using this API.
** ^If offset iOffset is less than N bytes from the end of the BLOB,
-** [SQLITE_ERROR] is returned and no data is written. The size of the
-** BLOB (and hence the maximum value of N+iOffset) can be determined
-** using the [sqlite3_blob_bytes()] interface. ^If N or iOffset are less
+** [SQLITE_ERROR] is returned and no data is written. The size of the
+** BLOB (and hence the maximum value of N+iOffset) can be determined
+** using the [sqlite3_blob_bytes()] interface. ^If N or iOffset are less
** than zero [SQLITE_ERROR] is returned and no data is written.
**
** ^An attempt to write to an expired [BLOB handle] fails with an
@@ -7286,13 +8264,13 @@ SQLITE_API int sqlite3_vfs_unregister(sqlite3_vfs*);
** ^The sqlite3_mutex_alloc() routine allocates a new
** mutex and returns a pointer to it. ^The sqlite3_mutex_alloc()
** routine returns NULL if it is unable to allocate the requested
-** mutex. The argument to sqlite3_mutex_alloc() must one of these
+** mutex. The argument to sqlite3_mutex_alloc() must be one of these
** integer constants:
**
**
** - SQLITE_MUTEX_FAST
** - SQLITE_MUTEX_RECURSIVE
-** - SQLITE_MUTEX_STATIC_MASTER
+** - SQLITE_MUTEX_STATIC_MAIN
** - SQLITE_MUTEX_STATIC_MEM
** - SQLITE_MUTEX_STATIC_OPEN
** - SQLITE_MUTEX_STATIC_PRNG
@@ -7349,18 +8327,20 @@ SQLITE_API int sqlite3_vfs_unregister(sqlite3_vfs*);
**
** ^(Some systems (for example, Windows 95) do not support the operation
** implemented by sqlite3_mutex_try(). On those systems, sqlite3_mutex_try()
-** will always return SQLITE_BUSY. The SQLite core only ever uses
-** sqlite3_mutex_try() as an optimization so this is acceptable
-** behavior.)^
+** will always return SQLITE_BUSY. In most cases the SQLite core only uses
+** sqlite3_mutex_try() as an optimization, so this is acceptable
+** behavior. The exceptions are unix builds that set the
+** SQLITE_ENABLE_SETLK_TIMEOUT build option. In that case a working
+** sqlite3_mutex_try() is required.)^
**
** ^The sqlite3_mutex_leave() routine exits a mutex that was
** previously entered by the same thread. The behavior
** is undefined if the mutex is not currently entered by the
** calling thread or is not currently allocated.
**
-** ^If the argument to sqlite3_mutex_enter(), sqlite3_mutex_try(), or
-** sqlite3_mutex_leave() is a NULL pointer, then all three routines
-** behave as no-ops.
+** ^If the argument to sqlite3_mutex_enter(), sqlite3_mutex_try(),
+** sqlite3_mutex_leave(), or sqlite3_mutex_free() is a NULL pointer,
+** then any of the four routines behaves as a no-op.
**
** See also: [sqlite3_mutex_held()] and [sqlite3_mutex_notheld()].
*/
@@ -7494,7 +8474,7 @@ SQLITE_API int sqlite3_mutex_notheld(sqlite3_mutex*);
*/
#define SQLITE_MUTEX_FAST 0
#define SQLITE_MUTEX_RECURSIVE 1
-#define SQLITE_MUTEX_STATIC_MASTER 2
+#define SQLITE_MUTEX_STATIC_MAIN 2
#define SQLITE_MUTEX_STATIC_MEM 3 /* sqlite3_malloc() */
#define SQLITE_MUTEX_STATIC_MEM2 4 /* NOT USED */
#define SQLITE_MUTEX_STATIC_OPEN 4 /* sqlite3BtreeOpen() */
@@ -7509,11 +8489,15 @@ SQLITE_API int sqlite3_mutex_notheld(sqlite3_mutex*);
#define SQLITE_MUTEX_STATIC_VFS2 12 /* For use by extension VFS */
#define SQLITE_MUTEX_STATIC_VFS3 13 /* For use by application VFS */
+/* Legacy compatibility: */
+#define SQLITE_MUTEX_STATIC_MASTER 2
+
+
/*
** CAPI3REF: Retrieve the mutex for a database connection
** METHOD: sqlite3
**
-** ^This interface returns a pointer the [sqlite3_mutex] object that
+** ^This interface returns a pointer to the [sqlite3_mutex] object that
** serializes access to the [database connection] given in the argument
** when the [threading mode] is Serialized.
** ^If the [threading mode] is Single-thread or Multi-thread then this
@@ -7540,7 +8524,7 @@ SQLITE_API sqlite3_mutex *sqlite3_db_mutex(sqlite3*);
** method becomes the return value of this routine.
**
** A few opcodes for [sqlite3_file_control()] are handled directly
-** by the SQLite core and never invoke the
+** by the SQLite core and never invoke the
** sqlite3_io_methods.xFileControl method.
** ^The [SQLITE_FCNTL_FILE_POINTER] value for the op parameter causes
** a pointer to the underlying [sqlite3_file] object to be written into
@@ -7598,15 +8582,18 @@ SQLITE_API int sqlite3_test_control(int op, ...);
#define SQLITE_TESTCTRL_PRNG_SAVE 5
#define SQLITE_TESTCTRL_PRNG_RESTORE 6
#define SQLITE_TESTCTRL_PRNG_RESET 7 /* NOT USED */
+#define SQLITE_TESTCTRL_FK_NO_ACTION 7
#define SQLITE_TESTCTRL_BITVEC_TEST 8
#define SQLITE_TESTCTRL_FAULT_INSTALL 9
#define SQLITE_TESTCTRL_BENIGN_MALLOC_HOOKS 10
#define SQLITE_TESTCTRL_PENDING_BYTE 11
#define SQLITE_TESTCTRL_ASSERT 12
#define SQLITE_TESTCTRL_ALWAYS 13
-#define SQLITE_TESTCTRL_RESERVE 14
+#define SQLITE_TESTCTRL_RESERVE 14 /* NOT USED */
+#define SQLITE_TESTCTRL_JSON_SELFCHECK 14
#define SQLITE_TESTCTRL_OPTIMIZATIONS 15
#define SQLITE_TESTCTRL_ISKEYWORD 16 /* NOT USED */
+#define SQLITE_TESTCTRL_GETOPT 16
#define SQLITE_TESTCTRL_SCRATCHMALLOC 17 /* NOT USED */
#define SQLITE_TESTCTRL_INTERNAL_FUNCTIONS 17
#define SQLITE_TESTCTRL_LOCALTIME_FAULT 18
@@ -7622,20 +8609,25 @@ SQLITE_API int sqlite3_test_control(int op, ...);
#define SQLITE_TESTCTRL_RESULT_INTREAL 27
#define SQLITE_TESTCTRL_PRNG_SEED 28
#define SQLITE_TESTCTRL_EXTRA_SCHEMA_CHECKS 29
-#define SQLITE_TESTCTRL_LAST 29 /* Largest TESTCTRL */
+#define SQLITE_TESTCTRL_SEEK_COUNT 30
+#define SQLITE_TESTCTRL_TRACEFLAGS 31
+#define SQLITE_TESTCTRL_TUNE 32
+#define SQLITE_TESTCTRL_LOGEST 33
+#define SQLITE_TESTCTRL_USELONGDOUBLE 34 /* NOT USED */
+#define SQLITE_TESTCTRL_LAST 34 /* Largest TESTCTRL */
/*
** CAPI3REF: SQL Keyword Checking
**
-** These routines provide access to the set of SQL language keywords
-** recognized by SQLite. Applications can uses these routines to determine
+** These routines provide access to the set of SQL language keywords
+** recognized by SQLite. Applications can use these routines to determine
** whether or not a specific identifier needs to be escaped (for example,
** by enclosing in double-quotes) so as not to confuse the parser.
**
** The sqlite3_keyword_count() interface returns the number of distinct
** keywords understood by SQLite.
**
-** The sqlite3_keyword_name(N,Z,L) interface finds the N-th keyword and
+** The sqlite3_keyword_name(N,Z,L) interface finds the 0-based N-th keyword and
** makes *Z point to that keyword expressed as UTF8 and writes the number
** of bytes in the keyword into *L. The string that *Z points to is not
** zero-terminated. The sqlite3_keyword_name(N,Z,L) routine returns
@@ -7699,14 +8691,14 @@ typedef struct sqlite3_str sqlite3_str;
**
** ^The [sqlite3_str_new(D)] interface allocates and initializes
** a new [sqlite3_str] object. To avoid memory leaks, the object returned by
-** [sqlite3_str_new()] must be freed by a subsequent call to
+** [sqlite3_str_new()] must be freed by a subsequent call to
** [sqlite3_str_finish(X)].
**
** ^The [sqlite3_str_new(D)] interface always returns a pointer to a
** valid [sqlite3_str] object, though in the event of an out-of-memory
** error the returned object might be a special singleton that will
-** silently reject new text, always return SQLITE_NOMEM from
-** [sqlite3_str_errcode()], always return 0 for
+** silently reject new text, always return SQLITE_NOMEM from
+** [sqlite3_str_errcode()], always return 0 for
** [sqlite3_str_length()], and always return NULL from
** [sqlite3_str_finish(X)]. It is always safe to use the value
** returned by [sqlite3_str_new(D)] as the sqlite3_str parameter
@@ -7742,9 +8734,9 @@ SQLITE_API char *sqlite3_str_finish(sqlite3_str*);
** These interfaces add content to an sqlite3_str object previously obtained
** from [sqlite3_str_new()].
**
-** ^The [sqlite3_str_appendf(X,F,...)] and
+** ^The [sqlite3_str_appendf(X,F,...)] and
** [sqlite3_str_vappendf(X,F,V)] interfaces use the [built-in printf]
-** functionality of SQLite to append formatted text onto the end of
+** functionality of SQLite to append formatted text onto the end of
** [sqlite3_str] object X.
**
** ^The [sqlite3_str_append(X,S,N)] method appends exactly N bytes from string S
@@ -7761,7 +8753,7 @@ SQLITE_API char *sqlite3_str_finish(sqlite3_str*);
** ^This method can be used, for example, to add whitespace indentation.
**
** ^The [sqlite3_str_reset(X)] method resets the string under construction
-** inside [sqlite3_str] object X back to zero bytes in length.
+** inside [sqlite3_str] object X back to zero bytes in length.
**
** These methods do not return a result code. ^If an error occurs, that fact
** is recorded in the [sqlite3_str] object and can be recovered by a
@@ -7796,7 +8788,7 @@ SQLITE_API void sqlite3_str_reset(sqlite3_str*);
** content of the dynamic string under construction in X. The value
** returned by [sqlite3_str_value(X)] is managed by the sqlite3_str object X
** and might be freed or altered by any subsequent method on the same
-** [sqlite3_str] object. Applications must not used the pointer returned
+** [sqlite3_str] object. Applications must not use the pointer returned by
** [sqlite3_str_value(X)] after any subsequent method call on the same
** object. ^Applications may change the content of the string returned
** by [sqlite3_str_value(X)] as long as they do not write into any bytes
@@ -7863,7 +8855,7 @@ SQLITE_API int sqlite3_status64(
** - This parameter records the largest memory allocation request
** handed to [sqlite3_malloc()] or [sqlite3_realloc()] (or their
** internal equivalents). Only the value returned in the
-** *pHighwater parameter to [sqlite3_status()] is of interest.
+** *pHighwater parameter to [sqlite3_status()] is of interest.
** The value written into the *pCurrent parameter is undefined.)^
**
** [[SQLITE_STATUS_MALLOC_COUNT]] ^(- SQLITE_STATUS_MALLOC_COUNT
@@ -7872,24 +8864,24 @@ SQLITE_API int sqlite3_status64(
**
** [[SQLITE_STATUS_PAGECACHE_USED]] ^(- SQLITE_STATUS_PAGECACHE_USED
** - This parameter returns the number of pages used out of the
-** [pagecache memory allocator] that was configured using
+** [pagecache memory allocator] that was configured using
** [SQLITE_CONFIG_PAGECACHE]. The
** value returned is in pages, not in bytes.)^
**
-** [[SQLITE_STATUS_PAGECACHE_OVERFLOW]]
+** [[SQLITE_STATUS_PAGECACHE_OVERFLOW]]
** ^(- SQLITE_STATUS_PAGECACHE_OVERFLOW
** - This parameter returns the number of bytes of page cache
** allocation which could not be satisfied by the [SQLITE_CONFIG_PAGECACHE]
** buffer and were forced to overflow to [sqlite3_malloc()]. The
** returned value includes allocations that overflowed because they
-** where too large (they were larger than the "sz" parameter to
+** were too large (they were larger than the "sz" parameter to
** [SQLITE_CONFIG_PAGECACHE]) and allocations that overflowed because
** no space was left in the page cache.)^
**
** [[SQLITE_STATUS_PAGECACHE_SIZE]] ^(- SQLITE_STATUS_PAGECACHE_SIZE
** - This parameter records the largest memory allocation request
** handed to the [pagecache memory allocator]. Only the value returned in the
-** *pHighwater parameter to [sqlite3_status()] is of interest.
+** *pHighwater parameter to [sqlite3_status()] is of interest.
** The value written into the *pCurrent parameter is undefined.)^
**
** [[SQLITE_STATUS_SCRATCH_USED]] - SQLITE_STATUS_SCRATCH_USED
@@ -7902,7 +8894,7 @@ SQLITE_API int sqlite3_status64(
** - No longer used.
**
** [[SQLITE_STATUS_PARSER_STACK]] ^(- SQLITE_STATUS_PARSER_STACK
-** - The *pHighwater parameter records the deepest parser stack.
+** - The *pHighwater parameter records the deepest parser stack.
** The *pCurrent value is undefined. The *pHighwater value is only
** meaningful if SQLite is compiled with [YYTRACKMAXSTACKDEPTH].)^
**
@@ -7924,12 +8916,12 @@ SQLITE_API int sqlite3_status64(
** CAPI3REF: Database Connection Status
** METHOD: sqlite3
**
-** ^This interface is used to retrieve runtime status information
+** ^This interface is used to retrieve runtime status information
** about a single [database connection]. ^The first argument is the
** database connection object to be interrogated. ^The second argument
** is an integer constant, taken from the set of
** [SQLITE_DBSTATUS options], that
-** determines the parameter to interrogate. The set of
+** determines the parameter to interrogate. The set of
** [SQLITE_DBSTATUS options] is likely
** to grow in future releases of SQLite.
**
@@ -7941,9 +8933,18 @@ SQLITE_API int sqlite3_status64(
** ^The sqlite3_db_status() routine returns SQLITE_OK on success and a
** non-zero [error code] on failure.
**
+** ^The sqlite3_db_status64(D,O,C,H,R) routine works exactly the same
+** way as the sqlite3_db_status(D,O,C,H,R) routine except that the C and H
+** parameters are pointers to 64-bit integers (type: sqlite3_int64) instead
+** of pointers to 32-bit integers, which allows larger status values
+** to be returned. If a status value exceeds 2,147,483,647 then
+** sqlite3_db_status() will truncate the value whereas sqlite3_db_status64()
+** will return the full value.
+**
** See also: [sqlite3_status()] and [sqlite3_stmt_status()].
*/
SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int resetFlg);
+SQLITE_API int sqlite3_db_status64(sqlite3*,int,sqlite3_int64*,sqlite3_int64*,int);
/*
** CAPI3REF: Status Parameters for database connections
@@ -7964,51 +8965,53 @@ SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int r
** checked out.)^
**
** [[SQLITE_DBSTATUS_LOOKASIDE_HIT]] ^(SQLITE_DBSTATUS_LOOKASIDE_HIT
-** This parameter returns the number of malloc attempts that were
+** This parameter returns the number of malloc attempts that were
** satisfied using lookaside memory. Only the high-water value is meaningful;
-** the current value is always zero.)^
+** the current value is always zero.)^
**
** [[SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE]]
** ^(SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE
-** This parameter returns the number malloc attempts that might have
+** This parameter returns the number of malloc attempts that might have
** been satisfied using lookaside memory but failed due to the amount of
** memory requested being larger than the lookaside slot size.
** Only the high-water value is meaningful;
-** the current value is always zero.)^
+** the current value is always zero.)^
**
** [[SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL]]
** ^(SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL
-** This parameter returns the number malloc attempts that might have
+** This parameter returns the number of malloc attempts that might have
** been satisfied using lookaside memory but failed due to all lookaside
** memory already being in use.
** Only the high-water value is meaningful;
-** the current value is always zero.)^
+** the current value is always zero.)^
**
** [[SQLITE_DBSTATUS_CACHE_USED]] ^(SQLITE_DBSTATUS_CACHE_USED
** This parameter returns the approximate number of bytes of heap
** memory used by all pager caches associated with the database connection.)^
** ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_USED is always 0.
+**
**
-** [[SQLITE_DBSTATUS_CACHE_USED_SHARED]]
+** [[SQLITE_DBSTATUS_CACHE_USED_SHARED]]
** ^(SQLITE_DBSTATUS_CACHE_USED_SHARED
** This parameter is similar to DBSTATUS_CACHE_USED, except that if a
** pager cache is shared between two or more connections the bytes of heap
** memory used by that pager cache is divided evenly between the attached
** connections.)^ In other words, if none of the pager caches associated
** with the database connection are shared, this request returns the same
-** value as DBSTATUS_CACHE_USED. Or, if one or more or the pager caches are
+** value as DBSTATUS_CACHE_USED. Or, if one or more of the pager caches are
** shared, the value returned by this call will be smaller than that returned
** by DBSTATUS_CACHE_USED. ^The highwater mark associated with
-** SQLITE_DBSTATUS_CACHE_USED_SHARED is always 0.
+** SQLITE_DBSTATUS_CACHE_USED_SHARED is always 0.
**
** [[SQLITE_DBSTATUS_SCHEMA_USED]] ^(SQLITE_DBSTATUS_SCHEMA_USED
** This parameter returns the approximate number of bytes of heap
** memory used to store the schema for all databases associated
-** with the connection - main, temp, and any [ATTACH]-ed databases.)^
+** with the connection - main, temp, and any [ATTACH]-ed databases.)^
** ^The full amount of memory used by the schemas is reported, even if the
** schema memory is shared with other database connections due to
** [shared cache mode] being enabled.
** ^The highwater mark associated with SQLITE_DBSTATUS_SCHEMA_USED is always 0.
+**
**
** [[SQLITE_DBSTATUS_STMT_USED]] ^(SQLITE_DBSTATUS_STMT_USED
** This parameter returns the approximate number of bytes of heap
@@ -8019,13 +9022,13 @@ SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int r
**
** [[SQLITE_DBSTATUS_CACHE_HIT]] ^(SQLITE_DBSTATUS_CACHE_HIT
** This parameter returns the number of pager cache hits that have
-** occurred.)^ ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_HIT
+** occurred.)^ ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_HIT
** is always 0.
**
**
** [[SQLITE_DBSTATUS_CACHE_MISS]] ^(SQLITE_DBSTATUS_CACHE_MISS
** This parameter returns the number of pager cache misses that have
-** occurred.)^ ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_MISS
+** occurred.)^ ^The highwater mark associated with SQLITE_DBSTATUS_CACHE_MISS
** is always 0.
**
**
@@ -8038,6 +9041,10 @@ SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int r
** If an IO or other error occurs while writing a page to disk, the effect
** on subsequent SQLITE_DBSTATUS_CACHE_WRITE requests is undefined.)^ ^The
** highwater mark associated with SQLITE_DBSTATUS_CACHE_WRITE is always 0.
+**
+** ^(There is overlap between the quantities measured by this parameter
+** (SQLITE_DBSTATUS_CACHE_WRITE) and SQLITE_DBSTATUS_TEMPBUF_SPILL.
+** Resetting one will reduce the other.)^
**
**
** [[SQLITE_DBSTATUS_CACHE_SPILL]] ^( SQLITE_DBSTATUS_CACHE_SPILL
@@ -8045,7 +9052,7 @@ SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int r
** been written to disk in the middle of a transaction due to the page
** cache overflowing. Transactions are more efficient if they are written
** to disk all at once. When pages spill mid-transaction, that introduces
-** additional overhead. This parameter can be used help identify
+** additional overhead. This parameter can be used to help identify
** inefficiencies that can be resolved by increasing the cache size.
**
**
@@ -8053,6 +9060,18 @@ SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int r
** This parameter returns zero for the current value if and only if
** all foreign key constraints (deferred or immediate) have been
** resolved.)^ ^The highwater mark is always 0.
+**
+** [[SQLITE_DBSTATUS_TEMPBUF_SPILL]] ^(SQLITE_DBSTATUS_TEMPBUF_SPILL
+** This parameter returns the number of bytes written to temporary
+** files on disk that could have been kept in memory had sufficient memory
+** been available. This value includes writes to intermediate tables that
+** are part of complex queries, external sorts that spill to disk, and
+** writes to TEMP tables.)^
+** ^The highwater mark is always 0.
+**
+** ^(There is overlap between the quantities measured by this parameter
+** (SQLITE_DBSTATUS_TEMPBUF_SPILL) and SQLITE_DBSTATUS_CACHE_WRITE.
+** Resetting one will reduce the other.)^
**
**
*/
@@ -8069,7 +9088,8 @@ SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int r
#define SQLITE_DBSTATUS_DEFERRED_FKS 10
#define SQLITE_DBSTATUS_CACHE_USED_SHARED 11
#define SQLITE_DBSTATUS_CACHE_SPILL 12
-#define SQLITE_DBSTATUS_MAX 12 /* Largest defined DBSTATUS */
+#define SQLITE_DBSTATUS_TEMPBUF_SPILL 13
+#define SQLITE_DBSTATUS_MAX 13 /* Largest defined DBSTATUS */
/*
@@ -8083,7 +9103,7 @@ SQLITE_API int sqlite3_db_status(sqlite3*, int op, int *pCur, int *pHiwtr, int r
** statements. For example, if the number of table steps greatly exceeds
** the number of table searches or result rows, that would tend to indicate
** that the prepared statement is using a full table scan rather than
-** an index.
+** an index.
**
** ^(This interface is used to retrieve and reset counter values from
** a [prepared statement]. The first argument is the prepared statement
@@ -8110,40 +9130,50 @@ SQLITE_API int sqlite3_stmt_status(sqlite3_stmt*, int op,int resetFlg);
** [[SQLITE_STMTSTATUS_FULLSCAN_STEP]] SQLITE_STMTSTATUS_FULLSCAN_STEP
** ^This is the number of times that SQLite has stepped forward in
** a table as part of a full table scan. Large numbers for this counter
-** may indicate opportunities for performance improvement through
+** may indicate opportunities for performance improvement through
** careful use of indices.
**
** [[SQLITE_STMTSTATUS_SORT]] SQLITE_STMTSTATUS_SORT
** ^This is the number of sort operations that have occurred.
** A non-zero value in this counter may indicate an opportunity to
-** improvement performance through careful use of indices.
+** improve performance through careful use of indices.
**
** [[SQLITE_STMTSTATUS_AUTOINDEX]] SQLITE_STMTSTATUS_AUTOINDEX
** ^This is the number of rows inserted into transient indices that
** were created automatically in order to help joins run faster.
** A non-zero value in this counter may indicate an opportunity to
-** improvement performance by adding permanent indices that do not
+** improve performance by adding permanent indices that do not
** need to be reinitialized each time the statement is run.
**
** [[SQLITE_STMTSTATUS_VM_STEP]] SQLITE_STMTSTATUS_VM_STEP
** ^This is the number of virtual machine operations executed
** by the prepared statement if that number is less than or equal
-** to 2147483647. The number of virtual machine operations can be
+** to 2147483647. The number of virtual machine operations can be
** used as a proxy for the total work done by the prepared statement.
** If the number of virtual machine operations exceeds 2147483647
-** then the value returned by this statement status code is undefined.
+** then the value returned by this statement status code is undefined.
**
** [[SQLITE_STMTSTATUS_REPREPARE]] SQLITE_STMTSTATUS_REPREPARE
** ^This is the number of times that the prepared statement has been
-** automatically regenerated due to schema changes or changes to
-** [bound parameters] that might affect the query plan.
+** automatically regenerated due to schema changes or changes to
+** [bound parameters] that might affect the query plan.
**
** [[SQLITE_STMTSTATUS_RUN]] SQLITE_STMTSTATUS_RUN
** ^This is the number of times that the prepared statement has
** been run. A single "run" for the purposes of this counter is one
** or more calls to [sqlite3_step()] followed by a call to [sqlite3_reset()].
** The counter is incremented on the first [sqlite3_step()] call of each
-** cycle.
+** cycle.
+**
+** [[SQLITE_STMTSTATUS_FILTER_MISS]]
+** [[SQLITE_STMTSTATUS_FILTER_HIT]]
+** SQLITE_STMTSTATUS_FILTER_HIT
+** SQLITE_STMTSTATUS_FILTER_MISS
+** ^SQLITE_STMTSTATUS_FILTER_HIT is the number of times that a join
+** step was bypassed because a Bloom filter returned not-found. The
+** corresponding SQLITE_STMTSTATUS_FILTER_MISS value is the number of
+** times that the Bloom filter returned a find, and thus the join step
+** had to be processed as normal.
**
** [[SQLITE_STMTSTATUS_MEMUSED]] SQLITE_STMTSTATUS_MEMUSED
** ^This is the approximate number of bytes of heap memory
@@ -8159,6 +9189,8 @@ SQLITE_API int sqlite3_stmt_status(sqlite3_stmt*, int op,int resetFlg);
#define SQLITE_STMTSTATUS_VM_STEP 4
#define SQLITE_STMTSTATUS_REPREPARE 5
#define SQLITE_STMTSTATUS_RUN 6
+#define SQLITE_STMTSTATUS_FILTER_MISS 7
+#define SQLITE_STMTSTATUS_FILTER_HIT 8
#define SQLITE_STMTSTATUS_MEMUSED 99
/*
@@ -8195,15 +9227,15 @@ struct sqlite3_pcache_page {
** KEYWORDS: {page cache}
**
** ^(The [sqlite3_config]([SQLITE_CONFIG_PCACHE2], ...) interface can
-** register an alternative page cache implementation by passing in an
+** register an alternative page cache implementation by passing in an
** instance of the sqlite3_pcache_methods2 structure.)^
-** In many applications, most of the heap memory allocated by
+** In many applications, most of the heap memory allocated by
** SQLite is used for the page cache.
-** By implementing a
+** By implementing a
** custom page cache using this API, an application can better control
-** the amount of memory consumed by SQLite, the way in which
-** that memory is allocated and released, and the policies used to
-** determine exactly which parts of a database file are cached and for
+** the amount of memory consumed by SQLite, the way in which
+** that memory is allocated and released, and the policies used to
+** determine exactly which parts of a database file are cached and for
** how long.
**
** The alternative page cache mechanism is an
@@ -8216,19 +9248,19 @@ struct sqlite3_pcache_page {
** [sqlite3_config()] returns.)^
**
** [[the xInit() page cache method]]
-** ^(The xInit() method is called once for each effective
+** ^(The xInit() method is called once for each effective
** call to [sqlite3_initialize()])^
** (usually only once during the lifetime of the process). ^(The xInit()
** method is passed a copy of the sqlite3_pcache_methods2.pArg value.)^
-** The intent of the xInit() method is to set up global data structures
-** required by the custom page cache implementation.
-** ^(If the xInit() method is NULL, then the
+** The intent of the xInit() method is to set up global data structures
+** required by the custom page cache implementation.
+** ^(If the xInit() method is NULL, then the
** built-in default page cache is used instead of the application defined
** page cache.)^
**
** [[the xShutdown() page cache method]]
** ^The xShutdown() method is called by [sqlite3_shutdown()].
-** It can be used to clean up
+** It can be used to clean up
** any outstanding resources before process shutdown, if required.
** ^The xShutdown() method may be NULL.
**
@@ -8246,9 +9278,9 @@ struct sqlite3_pcache_page {
** SQLite will typically create one cache instance for each open database file,
** though this is not guaranteed. ^The
** first parameter, szPage, is the size in bytes of the pages that must
-** be allocated by the cache. ^szPage will always a power of two. ^The
-** second parameter szExtra is a number of bytes of extra storage
-** associated with each page cache entry. ^The szExtra parameter will
+** be allocated by the cache. ^szPage will always be a power of two. ^The
+** second parameter szExtra is a number of bytes of extra storage
+** associated with each page cache entry. ^The szExtra parameter will be
** a number less than 250. SQLite will use the
** extra szExtra bytes on each page to store metadata about the underlying
** database page on disk. The value passed into szExtra depends
@@ -8256,17 +9288,17 @@ struct sqlite3_pcache_page {
** ^The third argument to xCreate(), bPurgeable, is true if the cache being
** created will be used to cache database pages of a file stored on disk, or
** false if it is used for an in-memory database. The cache implementation
-** does not have to do anything special based with the value of bPurgeable;
+** does not have to do anything special based upon the value of bPurgeable;
** it is purely advisory. ^On a cache where bPurgeable is false, SQLite will
** never invoke xUnpin() except to deliberately delete a page.
** ^In other words, calls to xUnpin() on a cache with bPurgeable set to
-** false will always have the "discard" flag set to true.
-** ^Hence, a cache created with bPurgeable false will
+** false will always have the "discard" flag set to true.
+** ^Hence, a cache created with bPurgeable set to false will
** never contain any unpinned pages.
**
** [[the xCachesize() page cache method]]
** ^(The xCachesize() method may be called at any time by SQLite to set the
-** suggested maximum cache-size (number of pages stored by) the cache
+** suggested maximum cache-size (number of pages stored) for the cache
** instance passed as the first argument. This is the value configured using
** the SQLite "[PRAGMA cache_size]" command.)^ As with the bPurgeable
** parameter, the implementation is not required to do anything with this
@@ -8275,12 +9307,12 @@ struct sqlite3_pcache_page {
** [[the xPagecount() page cache methods]]
** The xPagecount() method must return the number of pages currently
** stored in the cache, both pinned and unpinned.
-**
+**
** [[the xFetch() page cache methods]]
-** The xFetch() method locates a page in the cache and returns a pointer to
+** The xFetch() method locates a page in the cache and returns a pointer to
** an sqlite3_pcache_page object associated with that page, or a NULL pointer.
** The pBuf element of the returned sqlite3_pcache_page object will be a
-** pointer to a buffer of szPage bytes used to store the content of a
+** pointer to a buffer of szPage bytes used to store the content of a
** single database page. The pExtra element of sqlite3_pcache_page will be
** a pointer to the szExtra bytes of extra storage that SQLite has requested
** for each entry in the page cache.
@@ -8293,12 +9325,12 @@ struct sqlite3_pcache_page {
** implementation must return a pointer to the page buffer with its content
** intact. If the requested page is not already in the cache, then the
** cache implementation should use the value of the createFlag
-** parameter to help it determined what action to take:
+** parameter to help it determine what action to take:
**
**
** | createFlag | Behavior when page is not already in cache |
** |---|---|
** | 0 | Do not allocate a new page. Return NULL. |
-** | 1 | Allocate a new page if it easy and convenient to do so.
+** | 1 | Allocate a new page if it is easy and convenient to do so.
**       Otherwise return NULL. |
** | 2 | Make every effort to allocate a new page. Only return
**       NULL if allocating a new page is effectively impossible. |
@@ -8315,12 +9347,12 @@ struct sqlite3_pcache_page {
** as its second argument. If the third parameter, discard, is non-zero,
** then the page must be evicted from the cache.
** ^If the discard parameter is
-** zero, then the page may be discarded or retained at the discretion of
+** zero, then the page may be discarded or retained at the discretion of the
** page cache implementation. ^The page cache implementation
** may choose to evict unpinned pages at any time.
**
-** The cache must not perform any reference counting. A single
-** call to xUnpin() unpins the page regardless of the number of prior calls
+** The cache must not perform any reference counting. A single
+** call to xUnpin() unpins the page regardless of the number of prior calls
** to xFetch().
**
** [[the xRekey() page cache methods]]
@@ -8333,7 +9365,7 @@ struct sqlite3_pcache_page {
** When SQLite calls the xTruncate() method, the cache must discard all
** existing cache entries with page numbers (keys) greater than or equal
** to the value of the iLimit parameter passed to xTruncate(). If any
-** of these pages are pinned, they are implicitly unpinned, meaning that
+** of these pages are pinned, they become implicitly unpinned, meaning that
** they can be safely discarded.
**
** [[the xDestroy() page cache method]]
@@ -8360,7 +9392,7 @@ struct sqlite3_pcache_methods2 {
int (*xPagecount)(sqlite3_pcache*);
sqlite3_pcache_page *(*xFetch)(sqlite3_pcache*, unsigned key, int createFlag);
void (*xUnpin)(sqlite3_pcache*, sqlite3_pcache_page*, int discard);
- void (*xRekey)(sqlite3_pcache*, sqlite3_pcache_page*,
+ void (*xRekey)(sqlite3_pcache*, sqlite3_pcache_page*,
unsigned oldKey, unsigned newKey);
void (*xTruncate)(sqlite3_pcache*, unsigned iLimit);
void (*xDestroy)(sqlite3_pcache*);
@@ -8405,7 +9437,7 @@ typedef struct sqlite3_backup sqlite3_backup;
**
** The backup API copies the content of one database into another.
** It is useful either for creating backups of databases or
-** for copying in-memory databases to or from persistent files.
+** for copying in-memory databases to or from persistent files.
**
** See Also: [Using the SQLite Online Backup API]
**
@@ -8416,36 +9448,36 @@ typedef struct sqlite3_backup sqlite3_backup;
** ^Thus, the backup may be performed on a live source database without
** preventing other database connections from
** reading or writing to the source database while the backup is underway.
-**
-** ^(To perform a backup operation:
+**
+** ^(To perform a backup operation:
**
** - sqlite3_backup_init() is called once to initialize the
-**   backup,
-**
-** - sqlite3_backup_step() is called one or more times to transfer
+**   backup,
+**
+** - sqlite3_backup_step() is called one or more times to transfer
**   the data between the two databases, and finally
-**
-** - sqlite3_backup_finish() is called to release all resources
-**   associated with the backup operation.
+**
+** - sqlite3_backup_finish() is called to release all resources
+**   associated with the backup operation.
**
** )^
** There should be exactly one call to sqlite3_backup_finish() for each
** successful call to sqlite3_backup_init().
**
** [[sqlite3_backup_init()]] sqlite3_backup_init()
**
-** ^The D and N arguments to sqlite3_backup_init(D,N,S,M) are the
-** [database connection] associated with the destination database
+** ^The D and N arguments to sqlite3_backup_init(D,N,S,M) are the
+** [database connection] associated with the destination database
** and the database name, respectively.
** ^The database name is "main" for the main database, "temp" for the
** temporary database, or the name specified after the AS keyword in
** an [ATTACH] statement for an attached database.
-** ^The S and M arguments passed to
+** ^The S and M arguments passed to
** sqlite3_backup_init(D,N,S,M) identify the [database connection]
** and database name of the source database, respectively.
** ^The source and destination [database connections] (parameters S and D)
** must be different or else sqlite3_backup_init(D,N,S,M) will fail with
** an error.
**
-** ^A call to sqlite3_backup_init() will fail, returning NULL, if
-** there is already a read or read-write transaction open on the
+** ^A call to sqlite3_backup_init() will fail, returning NULL, if
+** there is already a read or read-write transaction open on the
** destination database.
**
** ^If an error occurs within sqlite3_backup_init(D,N,S,M), then NULL is
@@ -8457,14 +9489,14 @@ typedef struct sqlite3_backup sqlite3_backup;
** ^A successful call to sqlite3_backup_init() returns a pointer to an
** [sqlite3_backup] object.
** ^The [sqlite3_backup] object may be used with the sqlite3_backup_step() and
-** sqlite3_backup_finish() functions to perform the specified backup
+** sqlite3_backup_finish() functions to perform the specified backup
** operation.
**
** [[sqlite3_backup_step()]] sqlite3_backup_step()
**
-** ^Function sqlite3_backup_step(B,N) will copy up to N pages between
+** ^Function sqlite3_backup_step(B,N) will copy up to N pages between
** the source and destination databases specified by [sqlite3_backup] object B.
-** ^If N is negative, all remaining source pages are copied.
+** ^If N is negative, all remaining source pages are copied.
** ^If sqlite3_backup_step(B,N) successfully copies N pages and there
** are still more pages to be copied, then the function returns [SQLITE_OK].
** ^If sqlite3_backup_step(B,N) successfully finishes copying all pages
@@ -8486,8 +9518,8 @@ typedef struct sqlite3_backup sqlite3_backup;
**
** ^If sqlite3_backup_step() cannot obtain a required file-system lock, then
** the [sqlite3_busy_handler | busy-handler function]
-** is invoked (if one is specified). ^If the
-** busy-handler returns non-zero before the lock is available, then
+** is invoked (if one is specified). ^If the
+** busy-handler returns non-zero before the lock is available, then
** [SQLITE_BUSY] is returned to the caller. ^In this case the call to
** sqlite3_backup_step() can be retried later. ^If the source
** [database connection]
@@ -8495,15 +9527,15 @@ typedef struct sqlite3_backup sqlite3_backup;
** is called, then [SQLITE_LOCKED] is returned immediately. ^Again, in this
** case the call to sqlite3_backup_step() can be retried later on. ^(If
** [SQLITE_IOERR_ACCESS | SQLITE_IOERR_XXX], [SQLITE_NOMEM], or
-** [SQLITE_READONLY] is returned, then
-** there is no point in retrying the call to sqlite3_backup_step(). These
-** errors are considered fatal.)^ The application must accept
-** that the backup operation has failed and pass the backup operation handle
+** [SQLITE_READONLY] is returned, then
+** there is no point in retrying the call to sqlite3_backup_step(). These
+** errors are considered fatal.)^ The application must accept
+** that the backup operation has failed and pass the backup operation handle
** to the sqlite3_backup_finish() to release associated resources.
**
** ^The first call to sqlite3_backup_step() obtains an exclusive lock
-** on the destination file. ^The exclusive lock is not released until either
-** sqlite3_backup_finish() is called or the backup operation is complete
+** on the destination file. ^The exclusive lock is not released until either
+** sqlite3_backup_finish() is called or the backup operation is complete
** and sqlite3_backup_step() returns [SQLITE_DONE]. ^Every call to
** sqlite3_backup_step() obtains a [shared lock] on the source database that
** lasts for the duration of the sqlite3_backup_step() call.
@@ -8512,25 +9544,25 @@ typedef struct sqlite3_backup sqlite3_backup;
** through the backup process. ^If the source database is modified by an
** external process or via a database connection other than the one being
** used by the backup operation, then the backup will be automatically
-** restarted by the next call to sqlite3_backup_step(). ^If the source
-** database is modified by the using the same database connection as is used
+** restarted by the next call to sqlite3_backup_step(). ^If the source
+** database is modified by using the same database connection as is used
** by the backup operation, then the backup database is automatically
** updated at the same time.
**
** [[sqlite3_backup_finish()]] sqlite3_backup_finish()
**
-** When sqlite3_backup_step() has returned [SQLITE_DONE], or when the
+** When sqlite3_backup_step() has returned [SQLITE_DONE], or when the
** application wishes to abandon the backup operation, the application
** should destroy the [sqlite3_backup] by passing it to sqlite3_backup_finish().
** ^The sqlite3_backup_finish() interface releases all
-** resources associated with the [sqlite3_backup] object.
+** resources associated with the [sqlite3_backup] object.
** ^If sqlite3_backup_step() has not yet returned [SQLITE_DONE], then any
** active write-transaction on the destination database is rolled back.
** The [sqlite3_backup] object is invalid
** and may not be used following a call to sqlite3_backup_finish().
**
** ^The value returned by sqlite3_backup_finish is [SQLITE_OK] if no
-** sqlite3_backup_step() errors occurred, regardless or whether or not
+** sqlite3_backup_step() errors occurred, regardless of whether or not
** sqlite3_backup_step() completed.
** ^If an out-of-memory condition or IO error occurred during any prior
** sqlite3_backup_step() call on the same [sqlite3_backup] object, then
@@ -8563,28 +9595,38 @@ typedef struct sqlite3_backup sqlite3_backup;
** connections, then the source database connection may be used concurrently
** from within other threads.
**
-** However, the application must guarantee that the destination
-** [database connection] is not passed to any other API (by any thread) after
+** However, the application must guarantee that the destination
+** [database connection] is not passed to any other API (by any thread) after
** sqlite3_backup_init() is called and before the corresponding call to
** sqlite3_backup_finish(). SQLite does not currently check to see
** if the application incorrectly accesses the destination [database connection]
** and so no error code is reported, but the operations may malfunction
** nevertheless. Use of the destination database connection while a
-** backup is in progress might also also cause a mutex deadlock.
+** backup is in progress might also cause a mutex deadlock.
**
** If running in [shared cache mode], the application must
** guarantee that the shared cache used by the destination database
** is not accessed while the backup is running. In practice this means
-** that the application must guarantee that the disk file being
+** that the application must guarantee that the disk file being
** backed up to is not accessed by any connection within the process,
** not just the specific connection that was passed to sqlite3_backup_init().
**
-** The [sqlite3_backup] object itself is partially threadsafe. Multiple
+** The [sqlite3_backup] object itself is partially threadsafe. Multiple
** threads may safely make multiple concurrent calls to sqlite3_backup_step().
** However, the sqlite3_backup_remaining() and sqlite3_backup_pagecount()
** APIs are not strictly speaking threadsafe. If they are invoked at the
** same time as another thread is invoking sqlite3_backup_step() it is
** possible that they return invalid values.
+**
+** Alternatives To Using The Backup API
+**
+** Other techniques for safely creating a consistent backup of an SQLite
+** database include:
+**
+** - The [VACUUM INTO] command.
+** - The [sqlite3_rsync] utility program.
+**
*/
SQLITE_API sqlite3_backup *sqlite3_backup_init(
sqlite3 *pDest, /* Destination database handle */
@@ -8604,8 +9646,8 @@ SQLITE_API int sqlite3_backup_pagecount(sqlite3_backup *p);
** ^When running in shared-cache mode, a database operation may fail with
** an [SQLITE_LOCKED] error if the required locks on the shared-cache or
** individual tables within the shared-cache cannot be obtained. See
-** [SQLite Shared-Cache Mode] for a description of shared-cache locking.
-** ^This API may be used to register a callback that SQLite will invoke
+** [SQLite Shared-Cache Mode] for a description of shared-cache locking.
+** ^This API may be used to register a callback that SQLite will invoke
** when the connection currently holding the required lock relinquishes it.
** ^This API is only available if the library was compiled with the
** [SQLITE_ENABLE_UNLOCK_NOTIFY] C-preprocessor symbol defined.
@@ -8613,16 +9655,16 @@ SQLITE_API int sqlite3_backup_pagecount(sqlite3_backup *p);
** See Also: [Using the SQLite Unlock Notification Feature].
**
** ^Shared-cache locks are released when a database connection concludes
-** its current transaction, either by committing it or rolling it back.
+** its current transaction, either by committing it or rolling it back.
**
** ^When a connection (known as the blocked connection) fails to obtain a
** shared-cache lock and SQLITE_LOCKED is returned to the caller, the
** identity of the database connection (the blocking connection) that
-** has locked the required resource is stored internally. ^After an
+** has locked the required resource is stored internally. ^After an
** application receives an SQLITE_LOCKED error, it may call the
-** sqlite3_unlock_notify() method with the blocked connection handle as
+** sqlite3_unlock_notify() method with the blocked connection handle as
** the first argument to register for a callback that will be invoked
-** when the blocking connections current transaction is concluded. ^The
+** when the blocking connection's current transaction is concluded. ^The
** callback is invoked from within the [sqlite3_step] or [sqlite3_close]
** call that concludes the blocking connection's transaction.
**
@@ -8634,15 +9676,15 @@ SQLITE_API int sqlite3_backup_pagecount(sqlite3_backup *p);
**
** ^If the blocked connection is attempting to obtain a write-lock on a
** shared-cache table, and more than one other connection currently holds
-** a read-lock on the same table, then SQLite arbitrarily selects one of
+** a read-lock on the same table, then SQLite arbitrarily selects one of
** the other connections to use as the blocking connection.
**
-** ^(There may be at most one unlock-notify callback registered by a
+** ^(There may be at most one unlock-notify callback registered by a
** blocked connection. If sqlite3_unlock_notify() is called when the
** blocked connection already has a registered unlock-notify callback,
** then the new callback replaces the old.)^ ^If sqlite3_unlock_notify() is
** called with a NULL pointer as its second argument, then any existing
-** unlock-notify callback is canceled. ^The blocked connections
+** unlock-notify callback is canceled. ^The blocked connection's
** unlock-notify callback may also be canceled by closing the blocked
** connection using [sqlite3_close()].
**
@@ -8655,7 +9697,7 @@ SQLITE_API int sqlite3_backup_pagecount(sqlite3_backup *p);
**
** Callback Invocation Details
**
-** When an unlock-notify callback is registered, the application provides a
+** When an unlock-notify callback is registered, the application provides a
** single void* pointer that is passed to the callback when it is invoked.
** However, the signature of the callback function allows SQLite to pass
** it an array of void* context pointers. The first argument passed to
@@ -8668,12 +9710,12 @@ SQLITE_API int sqlite3_backup_pagecount(sqlite3_backup *p);
** same callback function, then instead of invoking the callback function
** multiple times, it is invoked once with the set of void* context pointers
** specified by the blocked connections bundled together into an array.
-** This gives the application an opportunity to prioritize any actions
+** This gives the application an opportunity to prioritize any actions
** related to the set of unblocked database connections.
**
** Deadlock Detection
**
-** Assuming that after registering for an unlock-notify callback a
+** Assuming that after registering for an unlock-notify callback a
** database waits for the callback to be issued before taking any further
** action (a reasonable assumption), then using this API may cause the
** application to deadlock. For example, if connection X is waiting for
@@ -8696,7 +9738,7 @@ SQLITE_API int sqlite3_backup_pagecount(sqlite3_backup *p);
**
** The "DROP TABLE" Exception
**
-** When a call to [sqlite3_step()] returns SQLITE_LOCKED, it is almost
+** When a call to [sqlite3_step()] returns SQLITE_LOCKED, it is almost
** always appropriate to call sqlite3_unlock_notify(). There is, however,
** one exception. When executing a "DROP TABLE" or "DROP INDEX" statement,
** SQLite checks if there are any currently executing SELECT statements
@@ -8709,7 +9751,7 @@ SQLITE_API int sqlite3_backup_pagecount(sqlite3_backup *p);
** One way around this problem is to check the extended error code returned
** by an sqlite3_step() call. ^(If there is a blocking connection, then the
** extended error code is set to SQLITE_LOCKED_SHAREDCACHE. Otherwise, in
-** the special "DROP TABLE/INDEX" case, the extended error code is just
+** the special "DROP TABLE/INDEX" case, the extended error code is just
** SQLITE_LOCKED.)^
*/
SQLITE_API int sqlite3_unlock_notify(
@@ -8800,8 +9842,8 @@ SQLITE_API void sqlite3_log(int iErrCode, const char *zFormat, ...);
** ^The [sqlite3_wal_hook()] function is used to register a callback that
** is invoked each time data is committed to a database in wal mode.
**
-** ^(The callback is invoked by SQLite after the commit has taken place and
-** the associated write-lock on the database released)^, so the implementation
+** ^(The callback is invoked by SQLite after the commit has taken place and
+** the associated write-lock on the database released)^, so the implementation
** may read, write or [checkpoint] the database as required.
**
** ^The first parameter passed to the callback function when it is invoked
@@ -8812,7 +9854,7 @@ SQLITE_API void sqlite3_log(int iErrCode, const char *zFormat, ...);
** is the number of pages currently in the write-ahead log file,
** including those that were just committed.
**
-** The callback function should normally return [SQLITE_OK]. ^If an error
+** ^The callback function should normally return [SQLITE_OK]. ^If an error
** code is returned, that error will propagate back up through the
** SQLite code base to cause the statement that provoked the callback
** to report an error, though the commit will have still occurred. If the
@@ -8820,15 +9862,29 @@ SQLITE_API void sqlite3_log(int iErrCode, const char *zFormat, ...);
** that does not correspond to any valid SQLite error code, the results
** are undefined.
**
-** A single database handle may have at most a single write-ahead log callback
-** registered at one time. ^Calling [sqlite3_wal_hook()] replaces any
-** previously registered write-ahead log callback. ^Note that the
-** [sqlite3_wal_autocheckpoint()] interface and the
-** [wal_autocheckpoint pragma] both invoke [sqlite3_wal_hook()] and will
-** overwrite any prior [sqlite3_wal_hook()] settings.
+** ^A single database handle may have at most a single write-ahead log
+** callback registered at one time. ^Calling [sqlite3_wal_hook()]
+** replaces the default behavior or any previously registered write-ahead
+** log callback.
+**
+** ^The return value is a copy of the third parameter from the
+** previous call, if any, or 0.
+**
+** ^The [sqlite3_wal_autocheckpoint()] interface and the
+** [wal_autocheckpoint pragma] both invoke [sqlite3_wal_hook()] and
+** will overwrite any prior [sqlite3_wal_hook()] settings.
+**
+** ^If a write-ahead log callback is set using this function then
+** [sqlite3_wal_checkpoint_v2()] or [PRAGMA wal_checkpoint]
+** should be invoked periodically to keep the write-ahead log file
+** from growing without bound.
+**
+** ^Passing a NULL pointer for the callback disables automatic
+** checkpointing entirely. To re-enable the default behavior, call
+** sqlite3_wal_autocheckpoint(db,1000) or use the [wal_autocheckpoint pragma].
*/
SQLITE_API void *sqlite3_wal_hook(
- sqlite3*,
+ sqlite3*,
int(*)(void *,sqlite3*,const char*,int),
void*
);
@@ -8841,8 +9897,8 @@ SQLITE_API void *sqlite3_wal_hook(
** [sqlite3_wal_hook()] that causes any database on [database connection] D
** to automatically [checkpoint]
** after committing a transaction if there are N or
-** more frames in the [write-ahead log] file. ^Passing zero or
-** a negative value as the nFrame parameter disables automatic
+** more frames in the [write-ahead log] file. ^Passing zero or
+** a negative value as the N parameter disables automatic
** checkpoints entirely.
**
** ^The callback registered by this function replaces any existing callback
@@ -8858,9 +9914,10 @@ SQLITE_API void *sqlite3_wal_hook(
**
** ^Every new [database connection] defaults to having the auto-checkpoint
** enabled with a threshold of 1000 or [SQLITE_DEFAULT_WAL_AUTOCHECKPOINT]
-** pages. The use of this interface
-** is only necessary if the default setting is found to be suboptimal
-** for a particular application.
+** pages.
+**
+** ^The use of this interface is only necessary if the default setting
+** is found to be suboptimal for a particular application.
*/
SQLITE_API int sqlite3_wal_autocheckpoint(sqlite3 *db, int N);
@@ -8871,7 +9928,7 @@ SQLITE_API int sqlite3_wal_autocheckpoint(sqlite3 *db, int N);
** ^(The sqlite3_wal_checkpoint(D,X) is equivalent to
** [sqlite3_wal_checkpoint_v2](D,X,[SQLITE_CHECKPOINT_PASSIVE],0,0).)^
**
** In brief, sqlite3_wal_checkpoint(D,X) causes the content in the
+** In brief, sqlite3_wal_checkpoint(D,X) causes the content in the
** [write-ahead log] for database X on [database connection] D to be
** transferred into the database file and for the write-ahead log to
** be reset. See the [checkpointing] documentation for addition
@@ -8897,10 +9954,10 @@ SQLITE_API int sqlite3_wal_checkpoint(sqlite3 *db, const char *zDb);
**
**
** - SQLITE_CHECKPOINT_PASSIVE
-** ^Checkpoint as many frames as possible without waiting for any database
-** readers or writers to finish, then sync the database file if all frames
+** ^Checkpoint as many frames as possible without waiting for any database
+** readers or writers to finish, then sync the database file if all frames
** in the log were checkpointed. ^The [busy-handler callback]
-** is never invoked in the SQLITE_CHECKPOINT_PASSIVE mode.
+** is never invoked in the SQLITE_CHECKPOINT_PASSIVE mode.
** ^On the other hand, passive mode might leave the checkpoint unfinished
** if there are concurrent readers or writers.
**
@@ -8914,9 +9971,9 @@ SQLITE_API int sqlite3_wal_checkpoint(sqlite3 *db, const char *zDb);
**
**
** - SQLITE_CHECKPOINT_RESTART
** ^This mode works the same way as SQLITE_CHECKPOINT_FULL with the addition
-** that after checkpointing the log file it blocks (calls the
+** that after checkpointing the log file it blocks (calls the
** [busy-handler callback])
-** until all readers are reading from the database file only. ^This ensures
+** until all readers are reading from the database file only. ^This ensures
** that the next writer will restart the log file from the beginning.
** ^Like SQLITE_CHECKPOINT_FULL, this mode blocks new
** database writer attempts while it is pending, but does not impede readers.
@@ -8925,6 +9982,11 @@ SQLITE_API int sqlite3_wal_checkpoint(sqlite3 *db, const char *zDb);
** ^This mode works the same way as SQLITE_CHECKPOINT_RESTART with the
** addition that it also truncates the log file to zero bytes just prior
** to a successful return.
+**
+**
+** - SQLITE_CHECKPOINT_NOOP
+** ^This mode always checkpoints zero frames. The only reason to invoke
+** a NOOP checkpoint is to access the values returned by
+** sqlite3_wal_checkpoint_v2() via output parameters *pnLog and *pnCkpt.
**
**
** ^If pnLog is not NULL, then *pnLog is set to the total number of frames in
@@ -8938,31 +10000,31 @@ SQLITE_API int sqlite3_wal_checkpoint(sqlite3 *db, const char *zDb);
** truncated to zero bytes and so both *pnLog and *pnCkpt will be set to zero.
**
** ^All calls obtain an exclusive "checkpoint" lock on the database file. ^If
-** any other process is running a checkpoint operation at the same time, the
-** lock cannot be obtained and SQLITE_BUSY is returned. ^Even if there is a
+** any other process is running a checkpoint operation at the same time, the
+** lock cannot be obtained and SQLITE_BUSY is returned. ^Even if there is a
** busy-handler configured, it will not be invoked in this case.
**
-** ^The SQLITE_CHECKPOINT_FULL, RESTART and TRUNCATE modes also obtain the
+** ^The SQLITE_CHECKPOINT_FULL, RESTART and TRUNCATE modes also obtain the
** exclusive "writer" lock on the database file. ^If the writer lock cannot be
** obtained immediately, and a busy-handler is configured, it is invoked and
** the writer lock retried until either the busy-handler returns 0 or the lock
** is successfully obtained. ^The busy-handler is also invoked while waiting for
** database readers as described above. ^If the busy-handler returns 0 before
** the writer lock is obtained or while waiting for database readers, the
-** checkpoint operation proceeds from that point in the same way as
-** SQLITE_CHECKPOINT_PASSIVE - checkpointing as many frames as possible
+** checkpoint operation proceeds from that point in the same way as
+** SQLITE_CHECKPOINT_PASSIVE - checkpointing as many frames as possible
** without blocking any further. ^SQLITE_BUSY is returned in this case.
**
** ^If parameter zDb is NULL or points to a zero length string, then the
-** specified operation is attempted on all WAL databases [attached] to
+** specified operation is attempted on all WAL databases [attached] to
** [database connection] db. In this case the
-** values written to output parameters *pnLog and *pnCkpt are undefined. ^If
-** an SQLITE_BUSY error is encountered when processing one or more of the
-** attached WAL databases, the operation is still attempted on any remaining
-** attached databases and SQLITE_BUSY is returned at the end. ^If any other
-** error occurs while processing an attached database, processing is abandoned
-** and the error code is returned to the caller immediately. ^If no error
-** (SQLITE_BUSY or otherwise) is encountered while processing the attached
+** values written to output parameters *pnLog and *pnCkpt are undefined. ^If
+** an SQLITE_BUSY error is encountered when processing one or more of the
+** attached WAL databases, the operation is still attempted on any remaining
+** attached databases and SQLITE_BUSY is returned at the end. ^If any other
+** error occurs while processing an attached database, processing is abandoned
+** and the error code is returned to the caller immediately. ^If no error
+** (SQLITE_BUSY or otherwise) is encountered while processing the attached
** databases, SQLITE_OK is returned.
**
** ^If database zDb is the name of an attached database that is not in WAL
@@ -8995,9 +10057,10 @@ SQLITE_API int sqlite3_wal_checkpoint_v2(
** See the [sqlite3_wal_checkpoint_v2()] documentation for details on the
** meaning of each of these checkpoint modes.
*/
+#define SQLITE_CHECKPOINT_NOOP -1 /* Do no work at all */
#define SQLITE_CHECKPOINT_PASSIVE 0 /* Do as much as possible w/o blocking */
#define SQLITE_CHECKPOINT_FULL 1 /* Wait for writers, then checkpoint */
-#define SQLITE_CHECKPOINT_RESTART 2 /* Like FULL but wait for for readers */
+#define SQLITE_CHECKPOINT_RESTART 2 /* Like FULL but wait for readers */
#define SQLITE_CHECKPOINT_TRUNCATE 3 /* Like RESTART but also truncate WAL */
/*
@@ -9022,7 +10085,7 @@ SQLITE_API int sqlite3_vtab_config(sqlite3*, int op, ...);
/*
** CAPI3REF: Virtual Table Configuration Options
-** KEYWORDS: {virtual table configuration options}
+** KEYWORDS: {virtual table configuration options}
** KEYWORDS: {virtual table configuration option}
**
** These macros define the various options to the
@@ -9039,33 +10102,33 @@ SQLITE_API int sqlite3_vtab_config(sqlite3*, int op, ...);
** support constraints. In this configuration (which is the default) if
** a call to the [xUpdate] method returns [SQLITE_CONSTRAINT], then the entire
** statement is rolled back as if [ON CONFLICT | OR ABORT] had been
-** specified as part of the users SQL statement, regardless of the actual
+** specified as part of the user's SQL statement, regardless of the actual
** ON CONFLICT mode specified.
**
** If X is non-zero, then the virtual table implementation guarantees
** that if [xUpdate] returns [SQLITE_CONSTRAINT], it will do so before
** any modifications to internal or persistent data structures have been made.
-** If the [ON CONFLICT] mode is ABORT, FAIL, IGNORE or ROLLBACK, SQLite
+** If the [ON CONFLICT] mode is ABORT, FAIL, IGNORE or ROLLBACK, SQLite
** is able to roll back a statement or database transaction, and abandon
-** or continue processing the current SQL statement as appropriate.
+** or continue processing the current SQL statement as appropriate.
** If the ON CONFLICT mode is REPLACE and the [xUpdate] method returns
** [SQLITE_CONSTRAINT], SQLite handles this as if the ON CONFLICT mode
** had been ABORT.
**
** Virtual table implementations that are required to handle OR REPLACE
-** must do so within the [xUpdate] method. If a call to the
-** [sqlite3_vtab_on_conflict()] function indicates that the current ON
-** CONFLICT policy is REPLACE, the virtual table implementation should
+** must do so within the [xUpdate] method. If a call to the
+** [sqlite3_vtab_on_conflict()] function indicates that the current ON
+** CONFLICT policy is REPLACE, the virtual table implementation should
** silently replace the appropriate rows within the xUpdate callback and
** return SQLITE_OK. Or, if this is not possible, it may return
-** SQLITE_CONSTRAINT, in which case SQLite falls back to OR ABORT
+** SQLITE_CONSTRAINT, in which case SQLite falls back to OR ABORT
** constraint handling.
**
**
** [[SQLITE_VTAB_DIRECTONLY]]SQLITE_VTAB_DIRECTONLY
** Calls of the form
** [sqlite3_vtab_config](db,SQLITE_VTAB_DIRECTONLY) from within the
-** the [xConnect] or [xCreate] methods of a [virtual table] implmentation
+** [xConnect] or [xCreate] methods of a [virtual table] implementation
** prohibits that virtual table from being used from within triggers and
** prohibits that virtual table from being used from within triggers and
** views.
**
@@ -9073,18 +10136,28 @@ SQLITE_API int sqlite3_vtab_config(sqlite3*, int op, ...);
** [[SQLITE_VTAB_INNOCUOUS]]SQLITE_VTAB_INNOCUOUS
** Calls of the form
** [sqlite3_vtab_config](db,SQLITE_VTAB_INNOCUOUS) from within the
-** the [xConnect] or [xCreate] methods of a [virtual table] implmentation
+** [xConnect] or [xCreate] methods of a [virtual table] implementation
** identify that virtual table as being safe to use from within triggers
** and views. Conceptually, the SQLITE_VTAB_INNOCUOUS tag means that the
** virtual table can do no serious harm even if it is controlled by a
** malicious hacker. Developers should avoid setting the SQLITE_VTAB_INNOCUOUS
** flag unless absolutely necessary.
**
+**
+** [[SQLITE_VTAB_USES_ALL_SCHEMAS]]SQLITE_VTAB_USES_ALL_SCHEMAS
+** Calls of the form
+** [sqlite3_vtab_config](db,SQLITE_VTAB_USES_ALL_SCHEMAS) from within
+** the [xConnect] or [xCreate] methods of a [virtual table] implementation
+** instruct the query planner to begin at least a read transaction on
+** all schemas ("main", "temp", and any ATTACH-ed databases) whenever the
+** virtual table is used.
+**
**
*/
#define SQLITE_VTAB_CONSTRAINT_SUPPORT 1
#define SQLITE_VTAB_INNOCUOUS 2
#define SQLITE_VTAB_DIRECTONLY 3
+#define SQLITE_VTAB_USES_ALL_SCHEMAS 4
/*
** CAPI3REF: Determine The Virtual Table Conflict Policy
@@ -9102,10 +10175,11 @@ SQLITE_API int sqlite3_vtab_on_conflict(sqlite3 *);
** CAPI3REF: Determine If Virtual Table Column Access Is For UPDATE
**
** If the sqlite3_vtab_nochange(X) routine is called within the [xColumn]
-** method of a [virtual table], then it returns true if and only if the
+** method of a [virtual table], then it might return true if the
** column is being fetched as part of an UPDATE operation during which the
-** column value will not change. Applications might use this to substitute
-** a return value that is less expensive to compute and that the corresponding
+** column value will not change. The virtual table implementation can use
+** this hint as permission to substitute a return value that is less
+** expensive to compute and that the corresponding
** [xUpdate] method understands as a "no-change" value.
**
** If the [xColumn] method calls sqlite3_vtab_nochange() and finds that
@@ -9114,31 +10188,314 @@ SQLITE_API int sqlite3_vtab_on_conflict(sqlite3 *);
** any of the [sqlite3_result_int|sqlite3_result_xxxxx() interfaces].
** In that case, [sqlite3_value_nochange(X)] will return true for the
** same column in the [xUpdate] method.
+**
+** The sqlite3_vtab_nochange() routine is an optimization. Virtual table
+** implementations should continue to give a correct answer even if the
+** sqlite3_vtab_nochange() interface were to always return false. In the
+** current implementation, the sqlite3_vtab_nochange() interface does always
+** return false for the enhanced [UPDATE FROM] statement.
*/
SQLITE_API int sqlite3_vtab_nochange(sqlite3_context*);
/*
** CAPI3REF: Determine The Collation For a Virtual Table Constraint
+** METHOD: sqlite3_index_info
**
** This function may only be called from within a call to the [xBestIndex]
-** method of a [virtual table].
+** method of a [virtual table]. This function returns a pointer to a string
+** that is the name of the appropriate collation sequence to use for text
+** comparisons on the constraint identified by its arguments.
+**
+** The first argument must be the pointer to the [sqlite3_index_info] object
+** that is the first parameter to the xBestIndex() method. The second argument
+** must be an index into the aConstraint[] array belonging to the
+** sqlite3_index_info structure passed to xBestIndex.
+**
+** Important:
+** The first parameter must be the same pointer that is passed into the
+** xBestIndex() method. The first parameter may not be a pointer to a
+** different [sqlite3_index_info] object, even an exact copy.
**
-** The first argument must be the sqlite3_index_info object that is the
-** first parameter to the xBestIndex() method. The second argument must be
-** an index into the aConstraint[] array belonging to the sqlite3_index_info
-** structure passed to xBestIndex. This function returns a pointer to a buffer
-** containing the name of the collation sequence for the corresponding
-** constraint.
+** The return value is computed as follows:
+**
+** 1. If the constraint comes from a WHERE clause expression that contains
+**    a [COLLATE operator], then the name of the collation specified by
+**    that COLLATE operator is returned.
+**
+** 2. If there is no COLLATE operator, but the column that is the subject
+**    of the constraint specifies an alternative collating sequence via
+**    a [COLLATE clause] on the column definition within the CREATE TABLE
+**    statement that was passed into [sqlite3_declare_vtab()], then the
+**    name of that alternative collating sequence is returned.
+**
+** 3. Otherwise, "BINARY" is returned.
+**
*/
-SQLITE_API SQLITE_EXPERIMENTAL const char *sqlite3_vtab_collation(sqlite3_index_info*,int);
+SQLITE_API const char *sqlite3_vtab_collation(sqlite3_index_info*,int);
+
+/*
+** CAPI3REF: Determine if a virtual table query is DISTINCT
+** METHOD: sqlite3_index_info
+**
+** This API may only be used from within an [xBestIndex|xBestIndex method]
+** of a [virtual table] implementation. The result of calling this
+** interface from outside of xBestIndex() is undefined and probably harmful.
+**
+** ^The sqlite3_vtab_distinct() interface returns an integer between 0 and
+** 3. The integer returned by sqlite3_vtab_distinct()
+** gives the virtual table additional information about how the query
+** planner wants the output to be ordered. As long as the virtual table
+** can meet the ordering requirements of the query planner, it may set
+** the "orderByConsumed" flag.
+**
+**
+** ^If the sqlite3_vtab_distinct() interface returns 0, that means
+** that the query planner needs the virtual table to return all rows in the
+** sort order defined by the "nOrderBy" and "aOrderBy" fields of the
+** [sqlite3_index_info] object. This is the default expectation. If the
+** virtual table outputs all rows in sorted order, then it is always safe for
+** the xBestIndex method to set the "orderByConsumed" flag, regardless of
+** the return value from sqlite3_vtab_distinct().
+**
+** ^(If the sqlite3_vtab_distinct() interface returns 1, that means
+** that the query planner does not need the rows to be returned in sorted order
+** as long as all rows with the same values in all columns identified by the
+** "aOrderBy" field are adjacent.)^ This mode is used when the query planner
+** is doing a GROUP BY.
+**
+** ^(If the sqlite3_vtab_distinct() interface returns 2, that means
+** that the query planner does not need the rows returned in any particular
+** order, as long as rows with the same values in all columns identified
+** by "aOrderBy" are adjacent.)^ ^(Furthermore, when two or more rows
+** contain the same values for all columns identified by "colUsed", all but
+** one such row may optionally be omitted from the result.)^
+** The virtual table is not required to omit rows that are duplicates
+** over the "colUsed" columns, but if the virtual table can do that without
+** too much extra effort, it could potentially help the query to run faster.
+** This mode is used for a DISTINCT query.
+**
+** ^(If the sqlite3_vtab_distinct() interface returns 3, that means the
+** virtual table must return rows in the order defined by "aOrderBy" as
+** if the sqlite3_vtab_distinct() interface had returned 0. However if
+** two or more rows in the result have the same values for all columns
+** identified by "colUsed", then all but one such row may optionally be
+** omitted.)^ Like when the return value is 2, the virtual table
+** is not required to omit rows that are duplicates over the "colUsed"
+** columns, but if the virtual table can do that without
+** too much extra effort, it could potentially help the query to run faster.
+** This mode is used for queries
+** that have both DISTINCT and ORDER BY clauses.
+**
+**
+** The following table summarizes the conditions under which the
+** virtual table is allowed to set the "orderByConsumed" flag based on
+** the value returned by sqlite3_vtab_distinct(). This table is a
+** restatement of the previous four paragraphs:
+**
+**   sqlite3_vtab_distinct() | rows returned in | rows with same aOrderBy | duplicates over colUsed
+**       return value        |  aOrderBy order  |   values are adjacent   |     may be omitted
+**   ------------------------+------------------+-------------------------+------------------------
+**             0             |       yes        |           yes           |           no
+**             1             |       no         |           yes           |           no
+**             2             |       no         |           yes           |           yes
+**             3             |       yes        |           yes           |           yes
+**
+** ^For the purposes of comparing virtual table output values to see if
+** they are the same for sorting purposes, two NULL values are considered
+** to be the same. In other words, the comparison operator is "IS"
+** (or "IS NOT DISTINCT FROM") and not "==".
+**
+** If a virtual table implementation is unable to meet the requirements
+** specified above, then it must not set the "orderByConsumed" flag in the
+** [sqlite3_index_info] object or an incorrect answer may result.
+**
+** ^A virtual table implementation is always free to return rows in any order
+** it wants, as long as the "orderByConsumed" flag is not set. ^When the
+** "orderByConsumed" flag is unset, the query planner will add extra
+** [bytecode] to ensure that the final results returned by the SQL query are
+** ordered correctly. The use of the "orderByConsumed" flag and the
+** sqlite3_vtab_distinct() interface is merely an optimization. ^Careful
+** use of the sqlite3_vtab_distinct() interface and the "orderByConsumed"
+** flag might help queries against a virtual table to run faster. Being
+** overly aggressive and setting the "orderByConsumed" flag when it is not
+** valid to do so, on the other hand, might cause SQLite to return incorrect
+** results.
+*/
+SQLITE_API int sqlite3_vtab_distinct(sqlite3_index_info*);
+
+/*
+** CAPI3REF: Identify and handle IN constraints in xBestIndex
+**
+** This interface may only be used from within an
+** [xBestIndex|xBestIndex() method] of a [virtual table] implementation.
+** The result of invoking this interface from any other context is
+** undefined and probably harmful.
+**
+** ^(A constraint on a virtual table of the form
+** "[IN operator|column IN (...)]" is
+** communicated to the xBestIndex method as a
+** [SQLITE_INDEX_CONSTRAINT_EQ] constraint.)^ If xBestIndex wants to use
+** this constraint, it must set the corresponding
+** aConstraintUsage[].argvIndex to a positive integer. ^(Then, under
+** the usual mode of handling IN operators, SQLite generates [bytecode]
+** that invokes the [xFilter|xFilter() method] once for each value
+** on the right-hand side of the IN operator.)^ Thus the virtual table
+** only sees a single value from the right-hand side of the IN operator
+** at a time.
+**
+** In some cases, however, it would be advantageous for the virtual
+** table to see all values on the right-hand side of the IN operator all at
+** once. The sqlite3_vtab_in() interface facilitates this in two ways:
+**
+** 1. ^A call to sqlite3_vtab_in(P,N,-1) will return true (non-zero)
+**    if and only if the [sqlite3_index_info|P->aConstraint][N] constraint
+**    is an [IN operator] that can be processed all at once. ^In other words,
+**    sqlite3_vtab_in() with -1 in the third argument is a mechanism
+**    by which the virtual table can ask SQLite if all-at-once processing
+**    of the IN operator is even possible.
+**
+** 2. ^A call to sqlite3_vtab_in(P,N,F) with F==1 or F==0 indicates
+**    to SQLite that the virtual table does or does not want to process
+**    the IN operator all-at-once, respectively. ^Thus when the third
+**    parameter (F) is non-negative, this interface is the mechanism by
+**    which the virtual table tells SQLite how it wants to process the
+**    IN operator.
+**
+** ^The sqlite3_vtab_in(P,N,F) interface can be invoked multiple times
+** within the same xBestIndex method call. ^For any given P,N pair,
+** the return value from sqlite3_vtab_in(P,N,F) will always be the same
+** within the same xBestIndex call. ^If the interface returns true
+** (non-zero), that means that the constraint is an IN operator
+** that can be processed all-at-once. ^If the constraint is not an IN
+** operator or cannot be processed all-at-once, then the interface returns
+** false.
+**
+** ^(All-at-once processing of the IN operator is selected if both of the
+** following conditions are met:
+**
+** 1. The P->aConstraintUsage[N].argvIndex value is set to a positive
+**    integer. This is how the virtual table tells SQLite that it wants to
+**    use the N-th constraint.
+**
+** 2. The last call to sqlite3_vtab_in(P,N,F) for which F was
+**    non-negative had F>=1.
+** )^
+**
+** ^If either or both of the conditions above are false, then SQLite uses
+** the traditional one-at-a-time processing strategy for the IN constraint.
+** ^If both conditions are true, then the argvIndex-th parameter to the
+** xFilter method will be an [sqlite3_value] that appears to be NULL,
+** but which can be passed to [sqlite3_vtab_in_first()] and
+** [sqlite3_vtab_in_next()] to find all values on the right-hand side
+** of the IN constraint.
+*/
+SQLITE_API int sqlite3_vtab_in(sqlite3_index_info*, int iCons, int bHandle);
+
+/*
+** CAPI3REF: Find all elements on the right-hand side of an IN constraint.
+**
+** These interfaces are only useful from within the
+** [xFilter|xFilter() method] of a [virtual table] implementation.
+** The result of invoking these interfaces from any other context
+** is undefined and probably harmful.
+**
+** The X parameter in a call to sqlite3_vtab_in_first(X,P) or
+** sqlite3_vtab_in_next(X,P) should be one of the parameters to the
+** xFilter method which invokes these routines, and specifically
+** a parameter that was previously selected for all-at-once IN constraint
+** processing using the [sqlite3_vtab_in()] interface in the
+** [xBestIndex|xBestIndex method]. ^(If the X parameter is not
+** an xFilter argument that was selected for all-at-once IN constraint
+** processing, then these routines return [SQLITE_ERROR].)^
+**
+** ^(Use these routines to access all values on the right-hand side
+** of the IN constraint using code like the following:
+**
+**
+** for(rc=sqlite3_vtab_in_first(pList, &pVal);
+** rc==SQLITE_OK && pVal;
+** rc=sqlite3_vtab_in_next(pList, &pVal)
+** ){
+** // do something with pVal
+** }
+** if( rc!=SQLITE_DONE ){
+** // an error has occurred
+** }
+** )^
+**
+** ^On success, the sqlite3_vtab_in_first(X,P) and sqlite3_vtab_in_next(X,P)
+** routines return SQLITE_OK and set *P to point to the first or next value
+** on the RHS of the IN constraint. ^If there are no more values on the
+** right hand side of the IN constraint, then *P is set to NULL and these
+** routines return [SQLITE_DONE]. ^The return value might be
+** some other value, such as SQLITE_NOMEM, in the event of a malfunction.
+**
+** The *ppOut values returned by these routines are only valid until the
+** next call to either of these routines or until the end of the xFilter
+** method from which these routines were called. If the virtual table
+** implementation needs to retain the *ppOut values for longer, it must make
+** copies. The *ppOut values are [protected sqlite3_value|protected].
+*/
+SQLITE_API int sqlite3_vtab_in_first(sqlite3_value *pVal, sqlite3_value **ppOut);
+SQLITE_API int sqlite3_vtab_in_next(sqlite3_value *pVal, sqlite3_value **ppOut);
+
+/*
+** CAPI3REF: Constraint values in xBestIndex()
+** METHOD: sqlite3_index_info
+**
+** This API may only be used from within the [xBestIndex|xBestIndex method]
+** of a [virtual table] implementation. The result of calling this interface
+** from outside of an xBestIndex method is undefined and probably harmful.
+**
+** ^When the sqlite3_vtab_rhs_value(P,J,V) interface is invoked from within
+** the [xBestIndex] method of a [virtual table] implementation, with P being
+** a copy of the [sqlite3_index_info] object pointer passed into xBestIndex and
+** J being a 0-based index into P->aConstraint[], then this routine
+** attempts to set *V to the value of the right-hand operand of
+** that constraint if the right-hand operand is known. ^If the
+** right-hand operand is not known, then *V is set to a NULL pointer.
+** ^The sqlite3_vtab_rhs_value(P,J,V) interface returns SQLITE_OK if
+** and only if *V is set to a value. ^The sqlite3_vtab_rhs_value(P,J,V)
+** interface returns SQLITE_NOTFOUND if the right-hand side of the J-th
+** constraint is not available. ^The sqlite3_vtab_rhs_value() interface
+** can return a result code other than SQLITE_OK or SQLITE_NOTFOUND if
+** something goes wrong.
+**
+** The sqlite3_vtab_rhs_value() interface is usually only successful if
+** the right-hand operand of a constraint is a literal value in the original
+** SQL statement. If the right-hand operand is an expression or a reference
+** to some other column or a [host parameter], then sqlite3_vtab_rhs_value()
+** will probably return [SQLITE_NOTFOUND].
+**
+** ^(Some constraints, such as [SQLITE_INDEX_CONSTRAINT_ISNULL] and
+** [SQLITE_INDEX_CONSTRAINT_ISNOTNULL], have no right-hand operand. For such
+** constraints, sqlite3_vtab_rhs_value() always returns SQLITE_NOTFOUND.)^
+**
+** ^The [sqlite3_value] object returned in *V is a protected sqlite3_value
+** and remains valid for the duration of the xBestIndex method call.
+** ^When xBestIndex returns, the sqlite3_value object returned by
+** sqlite3_vtab_rhs_value() is automatically deallocated.
+**
+** The "_rhs_" in the name of this routine is an abbreviation for
+** "Right-Hand Side".
+*/
+SQLITE_API int sqlite3_vtab_rhs_value(sqlite3_index_info*, int, sqlite3_value **ppVal);
/*
** CAPI3REF: Conflict resolution modes
** KEYWORDS: {conflict resolution mode}
**
** These constants are returned by [sqlite3_vtab_on_conflict()] to
-** inform a [virtual table] implementation what the [ON CONFLICT] mode
-** is for the SQL statement being evaluated.
+** inform a [virtual table] implementation of the [ON CONFLICT] mode
+** for the SQL statement being evaluated.
**
** Note that the [SQLITE_IGNORE] constant is also used as a potential
** return value from the [sqlite3_set_authorizer()] callback and that
@@ -9162,6 +10519,10 @@ SQLITE_API SQLITE_EXPERIMENTAL const char *sqlite3_vtab_collation(sqlite3_index_
** managed by the prepared statement S and will be automatically freed when
** S is finalized.
**
+** Not all values are available for all query elements. When a value is
+** not available, the output variable is set to -1 if the value is numeric,
+** or to NULL if it is a string (SQLITE_SCANSTAT_NAME).
+**
**
** [[SQLITE_SCANSTAT_NLOOP]] - SQLITE_SCANSTAT_NLOOP
** - ^The [sqlite3_int64] variable pointed to by the V parameter will be
@@ -9174,27 +10535,39 @@ SQLITE_API SQLITE_EXPERIMENTAL const char *sqlite3_vtab_collation(sqlite3_index_
** [[SQLITE_SCANSTAT_EST]] - SQLITE_SCANSTAT_EST
** - ^The "double" variable pointed to by the V parameter will be set to the
** query planner's estimate for the average number of rows output from each
-** iteration of the X-th loop. If the query planner's estimates was accurate,
+** iteration of the X-th loop. If the query planner's estimate was accurate,
** then this value will approximate the quotient NVISIT/NLOOP and the
** product of this value for all prior loops with the same SELECTID will
-** be the NLOOP value for the current loop.
+** be the NLOOP value for the current loop.
**
** [[SQLITE_SCANSTAT_NAME]] - SQLITE_SCANSTAT_NAME
** - ^The "const char *" variable pointed to by the V parameter will be set
** to a zero-terminated UTF-8 string containing the name of the index or table
-** used for the X-th loop.
+** used for the X-th loop.
**
** [[SQLITE_SCANSTAT_EXPLAIN]] - SQLITE_SCANSTAT_EXPLAIN
** - ^The "const char *" variable pointed to by the V parameter will be set
** to a zero-terminated UTF-8 string containing the [EXPLAIN QUERY PLAN]
-** description for the X-th loop.
+** description for the X-th loop.
**
-** [[SQLITE_SCANSTAT_SELECTID]] - SQLITE_SCANSTAT_SELECT
+** [[SQLITE_SCANSTAT_SELECTID]] - SQLITE_SCANSTAT_SELECTID
** - ^The "int" variable pointed to by the V parameter will be set to the
-** "select-id" for the X-th loop. The select-id identifies which query or
-** subquery the loop is part of. The main query has a select-id of zero.
-** The select-id is the same value as is output in the first column
-** of an [EXPLAIN QUERY PLAN] query.
+** id for the X-th query plan element. The id value is unique within the
+** statement. The select-id is the same value as is output in the first
+** column of an [EXPLAIN QUERY PLAN] query.
+**
+** [[SQLITE_SCANSTAT_PARENTID]] - SQLITE_SCANSTAT_PARENTID
+** - The "int" variable pointed to by the V parameter will be set to the
+** id of the parent of the current query element, if applicable, or
+** to zero if the query element has no parent. This is the same value as
+** returned in the second column of an [EXPLAIN QUERY PLAN] query.
+**
+** [[SQLITE_SCANSTAT_NCYCLE]] - SQLITE_SCANSTAT_NCYCLE
+** - The sqlite3_int64 output value is set to the number of cycles,
+** according to the processor time-stamp counter, that elapsed while the
+** query element was being processed. This value is not available for
+** all query elements - if it is unavailable the output variable is
+** set to -1.
**
*/
#define SQLITE_SCANSTAT_NLOOP 0
@@ -9203,12 +10576,14 @@ SQLITE_API SQLITE_EXPERIMENTAL const char *sqlite3_vtab_collation(sqlite3_index_
#define SQLITE_SCANSTAT_NAME 3
#define SQLITE_SCANSTAT_EXPLAIN 4
#define SQLITE_SCANSTAT_SELECTID 5
+#define SQLITE_SCANSTAT_PARENTID 6
+#define SQLITE_SCANSTAT_NCYCLE 7
/*
** CAPI3REF: Prepared Statement Scan Status
** METHOD: sqlite3_stmt
**
-** This interface returns information about the predicted and measured
+** These interfaces return information about the predicted and measured
** performance for pStmt. Advanced applications can use this
** interface to compare the predicted and the measured performance and
** issue warnings and/or rerun [ANALYZE] if discrepancies are found.
@@ -9219,19 +10594,25 @@ SQLITE_API SQLITE_EXPERIMENTAL const char *sqlite3_vtab_collation(sqlite3_index_
**
** The "iScanStatusOp" parameter determines which status information to return.
** The "iScanStatusOp" must be one of the [scanstatus options] or the behavior
-** of this interface is undefined.
-** ^The requested measurement is written into a variable pointed to by
-** the "pOut" parameter.
-** Parameter "idx" identifies the specific loop to retrieve statistics for.
-** Loops are numbered starting from zero. ^If idx is out of range - less than
-** zero or greater than or equal to the total number of loops used to implement
-** the statement - a non-zero value is returned and the variable that pOut
-** points to is unchanged.
-**
-** ^Statistics might not be available for all loops in all statements. ^In cases
-** where there exist loops with no available statistics, this function behaves
-** as if the loop did not exist - it returns non-zero and leave the variable
-** that pOut points to unchanged.
+** of this interface is undefined. ^The requested measurement is written into
+** a variable pointed to by the "pOut" parameter.
+**
+** The "flags" parameter must be passed a mask of flags. At present only
+** one flag is defined - SQLITE_SCANSTAT_COMPLEX. If SQLITE_SCANSTAT_COMPLEX
+** is specified, then status information is available for all elements
+** of a query plan that are reported by "EXPLAIN QUERY PLAN" output. If
+** SQLITE_SCANSTAT_COMPLEX is not specified, then only query plan elements
+** that correspond to query loops (the "SCAN..." and "SEARCH..." elements of
+** the EXPLAIN QUERY PLAN output) are available. Invoking API
+** sqlite3_stmt_scanstatus() is equivalent to calling
+** sqlite3_stmt_scanstatus_v2() with a zeroed flags parameter.
+**
+** Parameter "idx" identifies the specific query element to retrieve statistics
+** for. Query elements are numbered starting from zero. A value of -1 may
+** be used to retrieve statistics for the entire query. ^If idx is out of range
+** - less than -1 or greater than or equal to the total number of query
+** elements used to implement the statement - a non-zero value is returned and
+** the variable that pOut points to is unchanged.
**
** See also: [sqlite3_stmt_scanstatus_reset()]
*/
@@ -9240,7 +10621,20 @@ SQLITE_API int sqlite3_stmt_scanstatus(
int idx, /* Index of loop to report on */
int iScanStatusOp, /* Information desired. SQLITE_SCANSTAT_* */
void *pOut /* Result written here */
-);
+);
+SQLITE_API int sqlite3_stmt_scanstatus_v2(
+ sqlite3_stmt *pStmt, /* Prepared statement for which info desired */
+ int idx, /* Index of loop to report on */
+ int iScanStatusOp, /* Information desired. SQLITE_SCANSTAT_* */
+ int flags, /* Mask of flags defined below */
+ void *pOut /* Result written here */
+);
+
+/*
+** CAPI3REF: Prepared Statement Scan Status
+** KEYWORDS: {scan status flags}
+*/
+#define SQLITE_SCANSTAT_COMPLEX 0x0001
/*
** CAPI3REF: Zero Scan-Status Counters
@@ -9255,18 +10649,19 @@ SQLITE_API void sqlite3_stmt_scanstatus_reset(sqlite3_stmt*);
/*
** CAPI3REF: Flush caches to disk mid-transaction
+** METHOD: sqlite3
**
** ^If a write-transaction is open on [database connection] D when the
-** [sqlite3_db_cacheflush(D)] interface invoked, any dirty
-** pages in the pager-cache that are not currently in use are written out
+** [sqlite3_db_cacheflush(D)] interface is invoked, any dirty
+** pages in the pager-cache that are not currently in use are written out
** to disk. A dirty page may be in use if a database cursor created by an
** active SQL statement is reading from it, or if it is page 1 of a database
** file (page 1 is always "in use"). ^The [sqlite3_db_cacheflush(D)]
** interface flushes caches for all schemas - "main", "temp", and
** any [attached] databases.
**
-** ^If this function needs to obtain extra database locks before dirty pages
-** can be flushed to disk, it does so. ^If those locks cannot be obtained
+** ^If this function needs to obtain extra database locks before dirty pages
+** can be flushed to disk, it does so. ^If those locks cannot be obtained
** immediately and there is a busy-handler callback configured, it is invoked
** in the usual manner. ^If the required lock still cannot be obtained, then
** the database is skipped and an attempt made to flush any dirty pages
@@ -9287,6 +10682,7 @@ SQLITE_API int sqlite3_db_cacheflush(sqlite3*);
/*
** CAPI3REF: The pre-update hook.
+** METHOD: sqlite3
**
** ^These interfaces are only available if SQLite is compiled using the
** [SQLITE_ENABLE_PREUPDATE_HOOK] compile-time option.
@@ -9304,7 +10700,7 @@ SQLITE_API int sqlite3_db_cacheflush(sqlite3*);
**
** ^The preupdate hook only fires for changes to real database tables; the
** preupdate hook is not invoked for changes to [virtual tables] or to
-** system tables like sqlite_master or sqlite_stat1.
+** system tables like sqlite_sequence or sqlite_stat1.
**
** ^The second parameter to the preupdate callback is a pointer to
** the [database connection] that registered the preupdate hook.
@@ -9313,21 +10709,25 @@ SQLITE_API int sqlite3_db_cacheflush(sqlite3*);
** kind of update operation that is about to occur.
** ^(The fourth parameter to the preupdate callback is the name of the
** database within the database connection that is being modified. This
-** will be "main" for the main database or "temp" for TEMP tables or
+** will be "main" for the main database or "temp" for TEMP tables or
** the name given after the AS keyword in the [ATTACH] statement for attached
** databases.)^
** ^The fifth parameter to the preupdate callback is the name of the
** table that is being modified.
**
** For an UPDATE or DELETE operation on a [rowid table], the sixth
-** parameter passed to the preupdate callback is the initial [rowid] of the
+** parameter passed to the preupdate callback is the initial [rowid] of the
** row being modified or deleted. For an INSERT operation on a rowid table,
-** or any operation on a WITHOUT ROWID table, the value of the sixth
+** or any operation on a WITHOUT ROWID table, the value of the sixth
** parameter is undefined. For an INSERT or UPDATE on a rowid table the
** seventh parameter is the final rowid value of the row being inserted
** or updated. The value of the seventh parameter passed to the callback
** function is not defined for operations on WITHOUT ROWID tables, or for
-** INSERT operations on rowid tables.
+** DELETE operations on rowid tables.
+**
+** ^The sqlite3_preupdate_hook(D,C,P) function returns the P argument from
+** the previous call on the same [database connection] D, or NULL for
+** the first call on D.
**
** The [sqlite3_preupdate_old()], [sqlite3_preupdate_new()],
** [sqlite3_preupdate_count()], and [sqlite3_preupdate_depth()] interfaces
@@ -9361,10 +10761,19 @@ SQLITE_API int sqlite3_db_cacheflush(sqlite3*);
**
** ^The [sqlite3_preupdate_depth(D)] interface returns 0 if the preupdate
** callback was invoked as a result of a direct insert, update, or delete
-** operation; or 1 for inserts, updates, or deletes invoked by top-level
+** operation; or 1 for inserts, updates, or deletes invoked by top-level
** triggers; or 2 for changes resulting from triggers called by top-level
** triggers; and so forth.
**
+** When the [sqlite3_blob_write()] API is used to update a blob column,
+** the pre-update hook is invoked with SQLITE_DELETE, because the new
+** values are not yet available. In this case, when a callback made with
+** op==SQLITE_DELETE is actually a write using the sqlite3_blob_write()
+** API, the [sqlite3_preupdate_blobwrite()] interface returns the index
+** of the column being written. In other cases, where the pre-update hook
+** is being invoked for some other reason, including a regular DELETE,
+** sqlite3_preupdate_blobwrite() returns -1.
+**
** See also: [sqlite3_update_hook()]
*/
#if defined(SQLITE_ENABLE_PREUPDATE_HOOK)
@@ -9385,17 +10794,19 @@ SQLITE_API int sqlite3_preupdate_old(sqlite3 *, int, sqlite3_value **);
SQLITE_API int sqlite3_preupdate_count(sqlite3 *);
SQLITE_API int sqlite3_preupdate_depth(sqlite3 *);
SQLITE_API int sqlite3_preupdate_new(sqlite3 *, int, sqlite3_value **);
+SQLITE_API int sqlite3_preupdate_blobwrite(sqlite3 *);
#endif
/*
** CAPI3REF: Low-level system error code
+** METHOD: sqlite3
**
** ^Attempt to return the underlying operating system error code or error
** number that caused the most recent I/O error or failure to open a file.
** The return value is OS-dependent. For example, on unix systems, after
** [sqlite3_open_v2()] returns [SQLITE_CANTOPEN], this interface could be
** called to get back the underlying "errno" that caused the problem, such
-** as ENOSPC, EAUTH, EISDIR, and so forth.
+** as ENOSPC, EAUTH, EISDIR, and so forth.
*/
SQLITE_API int sqlite3_system_errno(sqlite3*);
@@ -9433,12 +10844,20 @@ typedef struct sqlite3_snapshot {
** [sqlite3_snapshot_get(D,S,P)] interface writes a pointer to the newly
** created [sqlite3_snapshot] object into *P and returns SQLITE_OK.
** If there is not already a read-transaction open on schema S when
-** this function is called, one is opened automatically.
+** this function is called, one is opened automatically.
+**
+** If a read-transaction is opened by this function, then it is guaranteed
+** that the returned snapshot object may not be invalidated by a database
+** writer or checkpointer until after the read-transaction is closed. This
+** is not guaranteed if a read-transaction is already open when this
+** function is called. In that case, any subsequent write or checkpoint
+** operation on the database may invalidate the returned snapshot handle,
+** even while the read-transaction remains open.
**
** The following must be true for this function to succeed. If any of
** the following statements are false when sqlite3_snapshot_get() is
** called, SQLITE_ERROR is returned. The final value of *P is undefined
-** in this case.
+** in this case.
**
**
** - The database handle must not be in [autocommit mode].
@@ -9450,13 +10869,13 @@ typedef struct sqlite3_snapshot {
**
**
** - One or more transactions must have been written to the current wal
** file since it was created on disk (by any connection). This means
-** that a snapshot cannot be taken on a wal mode database with no wal
+** that a snapshot cannot be taken on a wal mode database with no wal
** file immediately after it is first opened. At least one transaction
** must be written to it first.
**
**
** This function may also return SQLITE_NOMEM. If it is called with the
-** database handle in autocommit mode but fails for some other reason,
+** database handle in autocommit mode but fails for some other reason,
** whether or not a read transaction is opened on schema S is undefined.
**
** The [sqlite3_snapshot] object returned from a successful call to
@@ -9466,7 +10885,7 @@ typedef struct sqlite3_snapshot {
** The [sqlite3_snapshot_get()] interface is only available when the
** [SQLITE_ENABLE_SNAPSHOT] compile-time option is used.
*/
-SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_get(
+SQLITE_API int sqlite3_snapshot_get(
sqlite3 *db,
const char *zSchema,
sqlite3_snapshot **ppSnapshot
@@ -9476,38 +10895,38 @@ SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_get(
** CAPI3REF: Start a read transaction on an historical snapshot
** METHOD: sqlite3_snapshot
**
-** ^The [sqlite3_snapshot_open(D,S,P)] interface either starts a new read
-** transaction or upgrades an existing one for schema S of
-** [database connection] D such that the read transaction refers to
-** historical [snapshot] P, rather than the most recent change to the
-** database. ^The [sqlite3_snapshot_open()] interface returns SQLITE_OK
+** ^The [sqlite3_snapshot_open(D,S,P)] interface either starts a new read
+** transaction or upgrades an existing one for schema S of
+** [database connection] D such that the read transaction refers to
+** historical [snapshot] P, rather than the most recent change to the
+** database. ^The [sqlite3_snapshot_open()] interface returns SQLITE_OK
** on success or an appropriate [error code] if it fails.
**
-** ^In order to succeed, the database connection must not be in
+** ^In order to succeed, the database connection must not be in
** [autocommit mode] when [sqlite3_snapshot_open(D,S,P)] is called. If there
** is already a read transaction open on schema S, then the database handle
** must have no active statements (SELECT statements that have been passed
-** to sqlite3_step() but not sqlite3_reset() or sqlite3_finalize()).
+** to sqlite3_step() but not sqlite3_reset() or sqlite3_finalize()).
** SQLITE_ERROR is returned if either of these conditions is violated, or
** if schema S does not exist, or if the snapshot object is invalid.
**
** ^A call to sqlite3_snapshot_open() will fail to open if the specified
-** snapshot has been overwritten by a [checkpoint]. In this case
+** snapshot has been overwritten by a [checkpoint]. In this case
** SQLITE_ERROR_SNAPSHOT is returned.
**
-** If there is already a read transaction open when this function is
+** If there is already a read transaction open when this function is
** invoked, then the same read transaction remains open (on the same
** database snapshot) if SQLITE_ERROR, SQLITE_BUSY or SQLITE_ERROR_SNAPSHOT
** is returned. If another error code - for example SQLITE_PROTOCOL or an
** SQLITE_IOERR error code - is returned, then the final state of the
-** read transaction is undefined. If SQLITE_OK is returned, then the
+** read transaction is undefined. If SQLITE_OK is returned, then the
** read transaction is now open on database snapshot P.
**
** ^(A call to [sqlite3_snapshot_open(D,S,P)] will fail if the
** database connection D does not know that the database file for
** schema S is in [WAL mode]. A database connection might not know
** that the database file is in [WAL mode] if there has been no prior
-** I/O on that database connection, or if the database entered [WAL mode]
+** I/O on that database connection, or if the database entered [WAL mode]
** after the most recent I/O on the database connection.)^
** (Hint: Run "[PRAGMA application_id]" against a newly opened
** database connection in order to make it ready to use snapshots.)
@@ -9515,7 +10934,7 @@ SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_get(
** The [sqlite3_snapshot_open()] interface is only available when the
** [SQLITE_ENABLE_SNAPSHOT] compile-time option is used.
*/
-SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_open(
+SQLITE_API int sqlite3_snapshot_open(
sqlite3 *db,
const char *zSchema,
sqlite3_snapshot *pSnapshot
@@ -9532,24 +10951,24 @@ SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_open(
** The [sqlite3_snapshot_free()] interface is only available when the
** [SQLITE_ENABLE_SNAPSHOT] compile-time option is used.
*/
-SQLITE_API SQLITE_EXPERIMENTAL void sqlite3_snapshot_free(sqlite3_snapshot*);
+SQLITE_API void sqlite3_snapshot_free(sqlite3_snapshot*);
/*
** CAPI3REF: Compare the ages of two snapshot handles.
** METHOD: sqlite3_snapshot
**
** The sqlite3_snapshot_cmp(P1, P2) interface is used to compare the ages
-** of two valid snapshot handles.
+** of two valid snapshot handles.
**
-** If the two snapshot handles are not associated with the same database
-** file, the result of the comparison is undefined.
+** If the two snapshot handles are not associated with the same database
+** file, the result of the comparison is undefined.
**
** Additionally, the result of the comparison is only valid if both of the
** snapshot handles were obtained by calling sqlite3_snapshot_get() since the
** last time the wal file was deleted. The wal file is deleted when the
** database is changed back to rollback mode or when the number of database
-** clients drops to zero. If either snapshot handle was obtained before the
-** wal file was last deleted, the value returned by this function
+** clients drops to zero. If either snapshot handle was obtained before the
+** wal file was last deleted, the value returned by this function
** is undefined.
**
** Otherwise, this API returns a negative value if P1 refers to an older
@@ -9559,7 +10978,7 @@ SQLITE_API SQLITE_EXPERIMENTAL void sqlite3_snapshot_free(sqlite3_snapshot*);
** This interface is only available if SQLite is compiled with the
** [SQLITE_ENABLE_SNAPSHOT] option.
*/
-SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_cmp(
+SQLITE_API int sqlite3_snapshot_cmp(
sqlite3_snapshot *p1,
sqlite3_snapshot *p2
);
@@ -9587,20 +11006,21 @@ SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_cmp(
** This interface is only available if SQLite is compiled with the
** [SQLITE_ENABLE_SNAPSHOT] option.
*/
-SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_recover(sqlite3 *db, const char *zDb);
+SQLITE_API int sqlite3_snapshot_recover(sqlite3 *db, const char *zDb);
/*
** CAPI3REF: Serialize a database
**
-** The sqlite3_serialize(D,S,P,F) interface returns a pointer to memory
-** that is a serialization of the S database on [database connection] D.
+** The sqlite3_serialize(D,S,P,F) interface returns a pointer to
+** memory that is a serialization of the S database on
+** [database connection] D. If S is a NULL pointer, the main database is used.
** If P is not a NULL pointer, then the size of the database in bytes
** is written into *P.
**
** For an ordinary on-disk database file, the serialization is just a
** copy of the disk file. For an in-memory database or a "TEMP" database,
** the serialization is the same sequence of bytes which would be written
-** to disk if that database where backed up to disk.
+** to disk if that database were backed up to disk.
**
** The usual case is that sqlite3_serialize() copies the serialization of
** the database into memory obtained from [sqlite3_malloc64()] and returns
@@ -9609,21 +11029,28 @@ SQLITE_API SQLITE_EXPERIMENTAL int sqlite3_snapshot_recover(sqlite3 *db, const c
** contains the SQLITE_SERIALIZE_NOCOPY bit, then no memory allocations
** are made, and the sqlite3_serialize() function will return a pointer
** to the contiguous memory representation of the database that SQLite
-** is currently using for that database, or NULL if the no such contiguous
+** is currently using for that database, or NULL if no such contiguous
** memory representation of the database exists. A contiguous memory
** representation of the database will usually only exist if there has
** been a prior call to [sqlite3_deserialize(D,S,...)] with the same
** values of D and S.
-** The size of the database is written into *P even if the
+** The size of the database is written into *P even if the
** SQLITE_SERIALIZE_NOCOPY bit is set but no contiguous copy
** of the database exists.
**
+** After the call, if the SQLITE_SERIALIZE_NOCOPY bit had been set,
+** the returned buffer content will remain accessible and unchanged
+** until either the next write operation on the connection or when
+** the connection is closed, and applications must not modify the
+** buffer. If the bit had been clear, the returned buffer will not
+** be accessed by SQLite after the call.
+**
** A call to sqlite3_serialize(D,S,P,F) might return NULL even if the
** SQLITE_SERIALIZE_NOCOPY bit is omitted from argument F if a memory
** allocation error occurs.
**
-** This interface is only available if SQLite is compiled with the
-** [SQLITE_ENABLE_DESERIALIZE] option.
+** This interface is omitted if SQLite is compiled with the
+** [SQLITE_OMIT_DESERIALIZE] option.
*/
SQLITE_API unsigned char *sqlite3_serialize(
sqlite3 *db, /* The database connection */
@@ -9651,14 +11078,15 @@ SQLITE_API unsigned char *sqlite3_serialize(
/*
** CAPI3REF: Deserialize a database
**
-** The sqlite3_deserialize(D,S,P,N,M,F) interface causes the
+** The sqlite3_deserialize(D,S,P,N,M,F) interface causes the
** [database connection] D to disconnect from database S and then
-** reopen S as an in-memory database based on the serialization contained
-** in P. The serialized database P is N bytes in size. M is the size of
-** the buffer P, which might be larger than N. If M is larger than N, and
-** the SQLITE_DESERIALIZE_READONLY bit is not set in F, then SQLite is
-** permitted to add content to the in-memory database as long as the total
-** size does not exceed M bytes.
+** reopen S as an in-memory database based on the serialization
+** contained in P. If S is a NULL pointer, the main database is
+** used. The serialized database P is N bytes in size. M is the size
+** of the buffer P, which might be larger than N. If M is larger than
+** N, and the SQLITE_DESERIALIZE_READONLY bit is not set in F, then
+** SQLite is permitted to add content to the in-memory database as
+** long as the total size does not exceed M bytes.
**
** If the SQLITE_DESERIALIZE_FREEONCLOSE bit is set in F, then SQLite will
** invoke sqlite3_free() on the serialization buffer when the database
@@ -9666,22 +11094,36 @@ SQLITE_API unsigned char *sqlite3_serialize(
** SQLite will try to increase the buffer size using sqlite3_realloc64()
** if writes on the database cause it to grow larger than M bytes.
**
+** Applications must not modify the buffer P or invalidate it before
+** the database connection D is closed.
+**
** The sqlite3_deserialize() interface will fail with SQLITE_BUSY if the
** database is currently in a read transaction or is involved in a backup
** operation.
**
-** If sqlite3_deserialize(D,S,P,N,M,F) fails for any reason and if the
+** It is not possible to deserialize into the TEMP database. If the
+** S argument to sqlite3_deserialize(D,S,P,N,M,F) is "temp" then the
+** function returns SQLITE_ERROR.
+**
+** The deserialized database should not be in [WAL mode]. If the database
+** is in WAL mode, then any attempt to use the database file will result
+** in an [SQLITE_CANTOPEN] error. The application can set the
+** [file format version numbers] (bytes 18 and 19) of the input database P
+** to 0x01 prior to invoking sqlite3_deserialize(D,S,P,N,M,F) to force the
+** database file into rollback mode and work around this limitation.
+**
+** If sqlite3_deserialize(D,S,P,N,M,F) fails for any reason and if the
** SQLITE_DESERIALIZE_FREEONCLOSE bit is set in argument F, then
** [sqlite3_free()] is invoked on argument P prior to returning.
**
-** This interface is only available if SQLite is compiled with the
-** [SQLITE_ENABLE_DESERIALIZE] option.
+** This interface is omitted if SQLite is compiled with the
+** [SQLITE_OMIT_DESERIALIZE] option.
*/
SQLITE_API int sqlite3_deserialize(
sqlite3 *db, /* The database connection */
const char *zSchema, /* Which DB to reopen with the deserialization */
unsigned char *pData, /* The serialized database content */
- sqlite3_int64 szDb, /* Number bytes in the deserialization */
+ sqlite3_int64 szDb, /* Number of bytes in the deserialization */
sqlite3_int64 szBuf, /* Total size of buffer pData[] */
unsigned mFlags /* Zero or more SQLITE_DESERIALIZE_* flags */
);
@@ -9689,7 +11131,7 @@ SQLITE_API int sqlite3_deserialize(
/*
** CAPI3REF: Flags for sqlite3_deserialize()
**
-** The following are allowed values for 6th argument (the F argument) to
+** The following are allowed values for the 6th argument (the F argument) to
** the [sqlite3_deserialize(D,S,P,N,M,F)] interface.
**
** The SQLITE_DESERIALIZE_FREEONCLOSE means that the database serialization
@@ -9711,6 +11153,54 @@ SQLITE_API int sqlite3_deserialize(
#define SQLITE_DESERIALIZE_RESIZEABLE 2 /* Resize using sqlite3_realloc64() */
#define SQLITE_DESERIALIZE_READONLY 4 /* Database is read-only */
+/*
+** CAPI3REF: Bind array values to the CARRAY table-valued function
+**
+** The sqlite3_carray_bind(S,I,P,N,F,X) interface binds an array value to
+** a parameter of the [carray() table-valued function]. The
+** S parameter is a pointer to the [prepared statement] that uses the carray()
+** function. I is the parameter index to be bound. P is a pointer to the
+** array to be bound, and N is the number of elements in the array. The
+** F argument is one of the constants [SQLITE_CARRAY_INT32], [SQLITE_CARRAY_INT64],
+** [SQLITE_CARRAY_DOUBLE], [SQLITE_CARRAY_TEXT], or [SQLITE_CARRAY_BLOB] to
+** indicate the datatype of the array being bound. If the X argument is not a
+** NULL pointer, then SQLite will invoke the function X on the P parameter
+** after it has finished using P, even if the call to
+** sqlite3_carray_bind() fails. The special-case finalizer
+** SQLITE_TRANSIENT has no effect here.
+*/
+SQLITE_API int sqlite3_carray_bind(
+ sqlite3_stmt *pStmt, /* Statement to be bound */
+ int i, /* Parameter index */
+ void *aData, /* Pointer to array data */
+ int nData, /* Number of data elements */
+ int mFlags, /* CARRAY flags */
+ void (*xDel)(void*) /* Destructor for aData */
+);
+
+/*
+** CAPI3REF: Datatypes for the CARRAY table-valued function
+**
+** The fifth argument to the [sqlite3_carray_bind()] interface must be
+** one of the following constants, to specify the datatype of the array
+** that is being bound into the [carray() table-valued function].
+*/
+#define SQLITE_CARRAY_INT32 0 /* Data is 32-bit signed integers */
+#define SQLITE_CARRAY_INT64 1 /* Data is 64-bit signed integers */
+#define SQLITE_CARRAY_DOUBLE 2 /* Data is doubles */
+#define SQLITE_CARRAY_TEXT 3 /* Data is char* */
+#define SQLITE_CARRAY_BLOB 4 /* Data is struct iovec */
+
+/*
+** Versions of the above #defines that omit the initial SQLITE_, for
+** legacy compatibility.
+*/
+#define CARRAY_INT32 0 /* Data is 32-bit signed integers */
+#define CARRAY_INT64 1 /* Data is 64-bit signed integers */
+#define CARRAY_DOUBLE 2 /* Data is doubles */
+#define CARRAY_TEXT 3 /* Data is char* */
+#define CARRAY_BLOB 4 /* Data is struct iovec */
+
/*
** Undo the hack that converts floating point types to integer for
** builds on processors without floating point support.
@@ -9719,10 +11209,21 @@ SQLITE_API int sqlite3_deserialize(
# undef double
#endif
+#if defined(__wasi__)
+# undef SQLITE_WASI
+# define SQLITE_WASI 1
+# ifndef SQLITE_OMIT_LOAD_EXTENSION
+# define SQLITE_OMIT_LOAD_EXTENSION
+# endif
+# ifndef SQLITE_THREADSAFE
+# define SQLITE_THREADSAFE 0
+# endif
+#endif
+
#ifdef __cplusplus
} /* End of the 'extern "C"' block */
#endif
-#endif /* SQLITE3_H */
+/* #endif for SQLITE3_H will be added by mksqlite3.tcl */
/******** Begin file sqlite3rtree.h *********/
/*
@@ -9785,7 +11286,7 @@ struct sqlite3_rtree_geometry {
};
/*
-** Register a 2nd-generation geometry callback named zScore that can be
+** Register a 2nd-generation geometry callback named zScore that can be
** used as part of an R-Tree geometry query as follows:
**
** SELECT ... FROM <rtree> WHERE <rtree col> MATCH $zQueryFunc(... params ...)
@@ -9800,7 +11301,7 @@ SQLITE_API int sqlite3_rtree_query_callback(
/*
-** A pointer to a structure of the following type is passed as the
+** A pointer to a structure of the following type is passed as the
** argument to a scored geometry callback registered using
** sqlite3_rtree_query_callback().
**
@@ -9895,7 +11396,7 @@ typedef struct sqlite3_changeset_iter sqlite3_changeset_iter;
** is not possible for an application to register a pre-update hook on a
** database handle that has one or more session objects attached. Nor is
** it possible to create a session object attached to a database handle for
-** which a pre-update hook is already defined. The results of attempting
+** which a pre-update hook is already defined. The results of attempting
** either of these things are undefined.
**
** The session object will be used to create changesets for tables in
@@ -9913,17 +11414,62 @@ SQLITE_API int sqlite3session_create(
** CAPI3REF: Delete A Session Object
** DESTRUCTOR: sqlite3_session
**
-** Delete a session object previously allocated using
+** Delete a session object previously allocated using
** [sqlite3session_create()]. Once a session object has been deleted, the
** results of attempting to use pSession with any other session module
** function are undefined.
**
** Session objects must be deleted before the database handle to which they
-** are attached is closed. Refer to the documentation for
+** are attached is closed. Refer to the documentation for
** [sqlite3session_create()] for details.
*/
SQLITE_API void sqlite3session_delete(sqlite3_session *pSession);
+/*
+** CAPI3REF: Configure a Session Object
+** METHOD: sqlite3_session
+**
+** This method is used to configure a session object after it has been
+** created. At present the only valid values for the second parameter are
+** [SQLITE_SESSION_OBJCONFIG_SIZE] and [SQLITE_SESSION_OBJCONFIG_ROWID].
+**
+*/
+SQLITE_API int sqlite3session_object_config(sqlite3_session*, int op, void *pArg);
+
+/*
+** CAPI3REF: Options for sqlite3session_object_config
+**
+** The following values may be passed as the 2nd parameter to
+** sqlite3session_object_config().
+**
+** SQLITE_SESSION_OBJCONFIG_SIZE
+** This option is used to set, clear or query the flag that enables
+** the [sqlite3session_changeset_size()] API. Because it imposes some
+** computational overhead, this API is disabled by default. Argument
+** pArg must point to a value of type (int). If the value is initially
+** 0, then the sqlite3session_changeset_size() API is disabled. If it
+** is greater than 0, then the same API is enabled. Or, if the initial
+** value is less than zero, no change is made. In all cases the (int)
+** variable is set to 1 if the sqlite3session_changeset_size() API is
+** enabled following the current call, or 0 otherwise.
+**
+** It is an error (SQLITE_MISUSE) to attempt to modify this setting after
+** the first table has been attached to the session object.
+**
+** SQLITE_SESSION_OBJCONFIG_ROWID
+** This option is used to set, clear or query the flag that enables
+** collection of data for tables with no explicit PRIMARY KEY.
+**
+** Normally, tables with no explicit PRIMARY KEY are simply ignored
+** by the sessions module. However, if this flag is set, it behaves
+** as if such tables have a column "_rowid_ INTEGER PRIMARY KEY" inserted
+** as their leftmost columns.
+**
+** It is an error (SQLITE_MISUSE) to attempt to modify this setting after
+** the first table has been attached to the session object.
+*/
+#define SQLITE_SESSION_OBJCONFIG_SIZE 1
+#define SQLITE_SESSION_OBJCONFIG_ROWID 2
/*
** CAPI3REF: Enable Or Disable A Session Object
@@ -9937,10 +11483,10 @@ SQLITE_API void sqlite3session_delete(sqlite3_session *pSession);
** the eventual changesets.
**
** Passing zero to this function disables the session. Passing a value
-** greater than zero enables it. Passing a value less than zero is a
+** greater than zero enables it. Passing a value less than zero is a
** no-op, and may be used to query the current state of the session.
**
-** The return value indicates the final state of the session object: 0 if
+** The return value indicates the final state of the session object: 0 if
** the session is disabled, or 1 if it is enabled.
*/
SQLITE_API int sqlite3session_enable(sqlite3_session *pSession, int bEnable);
@@ -9955,7 +11501,7 @@ SQLITE_API int sqlite3session_enable(sqlite3_session *pSession, int bEnable);
**
** - The session object "indirect" flag is set when the change is
** made, or
-** - The change is made by an SQL trigger or foreign key action
+** - The change is made by an SQL trigger or foreign key action
** instead of directly as a result of a user's SQL statement.
**
**
@@ -9967,10 +11513,10 @@ SQLITE_API int sqlite3session_enable(sqlite3_session *pSession, int bEnable);
** flag. If the second argument passed to this function is zero, then the
** indirect flag is cleared. If it is greater than zero, the indirect flag
** is set. Passing a value less than zero does not modify the current value
-** of the indirect flag, and may be used to query the current state of the
+** of the indirect flag, and may be used to query the current state of the
** indirect flag for the specified session object.
**
-** The return value indicates the final state of the indirect flag: 0 if
+** The return value indicates the final state of the indirect flag: 0 if
** it is clear, or 1 if it is set.
*/
SQLITE_API int sqlite3session_indirect(sqlite3_session *pSession, int bIndirect);
@@ -9980,20 +11526,20 @@ SQLITE_API int sqlite3session_indirect(sqlite3_session *pSession, int bIndirect)
** METHOD: sqlite3_session
**
** If argument zTab is not NULL, then it is the name of a table to attach
-** to the session object passed as the first argument. All subsequent changes
-** made to the table while the session object is enabled will be recorded. See
+** to the session object passed as the first argument. All subsequent changes
+** made to the table while the session object is enabled will be recorded. See
** documentation for [sqlite3session_changeset()] for further details.
**
** Or, if argument zTab is NULL, then changes are recorded for all tables
-** in the database. If additional tables are added to the database (by
-** executing "CREATE TABLE" statements) after this call is made, changes for
+** in the database. If additional tables are added to the database (by
+** executing "CREATE TABLE" statements) after this call is made, changes for
** the new tables are also recorded.
**
** Changes can only be recorded for tables that have a PRIMARY KEY explicitly
-** defined as part of their CREATE TABLE statement. It does not matter if the
+** defined as part of their CREATE TABLE statement. It does not matter if the
** PRIMARY KEY is an "INTEGER PRIMARY KEY" (rowid alias) or not. The PRIMARY
** KEY may consist of a single column, or may be a composite key.
-**
+**
** It is not an error if the named table does not exist in the database. Nor
** is it an error if the named table does not have a PRIMARY KEY. However,
** no changes will be recorded in either of these scenarios.
@@ -10001,29 +11547,29 @@ SQLITE_API int sqlite3session_indirect(sqlite3_session *pSession, int bIndirect)
** Changes are not recorded for individual rows that have NULL values stored
** in one or more of their PRIMARY KEY columns.
**
-** SQLITE_OK is returned if the call completes without error. Or, if an error
+** SQLITE_OK is returned if the call completes without error. Or, if an error
** occurs, an SQLite error code (e.g. SQLITE_NOMEM) is returned.
**
** Special sqlite_stat1 Handling
**
-** As of SQLite version 3.22.0, the "sqlite_stat1" table is an exception to
+** As of SQLite version 3.22.0, the "sqlite_stat1" table is an exception to
** some of the rules above. In SQLite, the schema of sqlite_stat1 is:
**
-** CREATE TABLE sqlite_stat1(tbl,idx,stat)
+** CREATE TABLE sqlite_stat1(tbl,idx,stat)
**
**
-** Even though sqlite_stat1 does not have a PRIMARY KEY, changes are
-** recorded for it as if the PRIMARY KEY is (tbl,idx). Additionally, changes
+** Even though sqlite_stat1 does not have a PRIMARY KEY, changes are
+** recorded for it as if the PRIMARY KEY is (tbl,idx). Additionally, changes
** are recorded for rows for which (idx IS NULL) is true. However, for such
** rows a zero-length blob (SQL value X'') is stored in the changeset or
** patchset instead of a NULL value. This allows such changesets to be
** manipulated by legacy implementations of sqlite3changeset_invert(),
** concat() and similar.
**
-** The sqlite3changeset_apply() function automatically converts the
+** The sqlite3changeset_apply() function automatically converts the
** zero-length blob back to a NULL value when updating the sqlite_stat1
** table. However, if the application calls sqlite3changeset_new(),
-** sqlite3changeset_old() or sqlite3changeset_conflict on a changeset
+** sqlite3changeset_old() or sqlite3changeset_conflict on a changeset
** iterator directly (including on a changeset iterator passed to a
** conflict-handler callback) then the X'' value is returned. The application
** must translate X'' to NULL itself if required.
@@ -10042,10 +11588,10 @@ SQLITE_API int sqlite3session_attach(
** CAPI3REF: Set a table filter on a Session Object.
** METHOD: sqlite3_session
**
-** The second argument (xFilter) is the "filter callback". For changes to rows
+** The second argument (xFilter) is the "filter callback". For changes to rows
** in tables that are not attached to the Session object, the filter is called
-** to determine whether changes to the table's rows should be tracked or not.
-** If xFilter returns 0, changes are not tracked. Note that once a table is
+** to determine whether changes to the table's rows should be tracked or not.
+** If xFilter returns 0, changes are not tracked. Note that once a table is
** attached, xFilter will not be called again.
*/
SQLITE_API void sqlite3session_table_filter(
@@ -10061,9 +11607,9 @@ SQLITE_API void sqlite3session_table_filter(
** CAPI3REF: Generate A Changeset From A Session Object
** METHOD: sqlite3_session
**
-** Obtain a changeset containing changes to the tables attached to the
-** session object passed as the first argument. If successful,
-** set *ppChangeset to point to a buffer containing the changeset
+** Obtain a changeset containing changes to the tables attached to the
+** session object passed as the first argument. If successful,
+** set *ppChangeset to point to a buffer containing the changeset
** and *pnChangeset to the size of the changeset in bytes before returning
** SQLITE_OK. If an error occurs, set both *ppChangeset and *pnChangeset to
** zero and return an SQLite error code.
@@ -10078,7 +11624,7 @@ SQLITE_API void sqlite3session_table_filter(
** modifies the values of primary key columns. If such a change is made, it
** is represented in a changeset as a DELETE followed by an INSERT.
**
-** Changes are not recorded for rows that have NULL values stored in one or
+** Changes are not recorded for rows that have NULL values stored in one or
** more of their PRIMARY KEY columns. If such a row is inserted or deleted,
** no corresponding change is present in the changesets returned by this
** function. If an existing row with one or more NULL values stored in
@@ -10131,14 +11677,14 @@ SQLITE_API void sqlite3session_table_filter(
**
** <li> For each record generated by an insert, the database is queried
** for a row with a matching primary key. If one is found, an INSERT
-** change is added to the changeset. If no such row is found, no change
+** change is added to the changeset. If no such row is found, no change
** is added to the changeset.
**
-** <li> For each record generated by an update or delete, the database is
+** <li> For each record generated by an update or delete, the database is
** queried for a row with a matching primary key. If such a row is
** found and one or more of the non-primary key fields have been
-** modified from their original values, an UPDATE change is added to
-** the changeset. Or, if no such row is found in the table, a DELETE
+** modified from their original values, an UPDATE change is added to
+** the changeset. Or, if no such row is found in the table, a DELETE
** change is added to the changeset. If there is a row with a matching
** primary key in the database, but all fields contain their original
** values, no change is added to the changeset.
@@ -10146,7 +11692,7 @@ SQLITE_API void sqlite3session_table_filter(
**
** This means, amongst other things, that if a row is inserted and then later
** deleted while a session object is active, neither the insert nor the delete
-** will be present in the changeset. Or if a row is deleted and then later a
+** will be present in the changeset. Or if a row is deleted and then later a
** row with the same primary key values inserted while a session object is
** active, the resulting changeset will contain an UPDATE change instead of
** a DELETE and an INSERT.
@@ -10155,12 +11701,13 @@ SQLITE_API void sqlite3session_table_filter(
** it does not accumulate records when rows are inserted, updated or deleted.
** This may appear to have some counter-intuitive effects if a single row
** is written to more than once during a session. For example, if a row
-** is inserted while a session object is enabled, then later deleted while
+** is inserted while a session object is enabled, then later deleted while
** the same session object is disabled, no INSERT record will appear in the
** changeset, even though the delete took place while the session was disabled.
-** Or, if one field of a row is updated while a session is disabled, and
-** another field of the same row is updated while the session is enabled, the
-** resulting changeset will contain an UPDATE change that updates both fields.
+** Or, if one field of a row is updated while a session is enabled, and
+** then another field of the same row is updated while the session is disabled,
+** the resulting changeset will contain an UPDATE change that updates both
+** fields.
*/
SQLITE_API int sqlite3session_changeset(
sqlite3_session *pSession, /* Session object */
@@ -10168,6 +11715,22 @@ SQLITE_API int sqlite3session_changeset(
void **ppChangeset /* OUT: Buffer containing changeset */
);
+/*
+** CAPI3REF: Return An Upper-limit For The Size Of The Changeset
+** METHOD: sqlite3_session
+**
+** By default, this function always returns 0. For it to return
+** a useful result, the sqlite3_session object must have been configured
+** to enable this API using sqlite3session_object_config() with the
+** SQLITE_SESSION_OBJCONFIG_SIZE verb.
+**
+** When enabled, this function returns an upper limit, in bytes, for the size
+** of the changeset that might be produced if sqlite3session_changeset() were
+** called. The final changeset size might be equal to or smaller than the
+** size in bytes returned by this function.
+*/
+SQLITE_API sqlite3_int64 sqlite3session_changeset_size(sqlite3_session *pSession);
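Taken together with sqlite3session_object_config(), the intended usage looks roughly like the sketch below. This is an illustration only: the helper name is ours, and it assumes a build with SQLITE_ENABLE_SESSION and SQLITE_ENABLE_PREUPDATE_HOOK.

```c
#include <sqlite3.h>

/* Hypothetical helper: create a session on "main" with size tracking
** enabled, so that sqlite3session_changeset_size() returns a useful
** upper bound later on. */
static int create_sized_session(sqlite3 *db, sqlite3_session **ppOut){
  sqlite3_session *pSession = 0;
  int bEnable = 1;
  int rc = sqlite3session_create(db, "main", &pSession);
  if( rc==SQLITE_OK ){
    /* Size tracking must be configured before any tables are attached. */
    rc = sqlite3session_object_config(
        pSession, SQLITE_SESSION_OBJCONFIG_SIZE, &bEnable
    );
  }
  if( rc==SQLITE_OK ){
    rc = sqlite3session_attach(pSession, 0);   /* 0 => all tables */
  }
  if( rc!=SQLITE_OK ){
    sqlite3session_delete(pSession);
    pSession = 0;
  }
  *ppOut = pSession;
  return rc;
}

/* Later, after the application has written to the database:
**   sqlite3_int64 nBound = sqlite3session_changeset_size(pSession);
*/
```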
+
/*
** CAPI3REF: Load The Difference Between Tables Into A Session
** METHOD: sqlite3_session
@@ -10179,7 +11742,7 @@ SQLITE_API int sqlite3session_changeset(
** an error).
**
** Argument zFromDb must be the name of a database ("main", "temp" etc.)
-** attached to the same database handle as the session object that contains
+** attached to the same database handle as the session object that contains
** a table compatible with the table attached to the session by this function.
** A table is considered compatible if it:
**
@@ -10195,33 +11758,34 @@ SQLITE_API int sqlite3session_changeset(
** APIs, tables without PRIMARY KEYs are simply ignored.
**
** This function adds a set of changes to the session object that could be
-** used to update the table in database zFrom (call this the "from-table")
-** so that its content is the same as the table attached to the session
+** used to update the table in database zFrom (call this the "from-table")
+** so that its content is the same as the table attached to the session
** object (call this the "to-table"). Specifically:
**
**
-** <li> For each row (primary key) that exists in the to-table but not in
+** <li> For each row (primary key) that exists in the to-table but not in
** the from-table, an INSERT record is added to the session object.
**
-** <li> For each row (primary key) that exists in the to-table but not in
-** the from-table, a DELETE record is added to the session object.
+** <li> For each row (primary key) that exists in the from-table but not in
+** the to-table, a DELETE record is added to the session object.
**
-** <li> For each row (primary key) that exists in both tables, but features
+** <li> For each row (primary key) that exists in both tables, but features
** different non-PK values in each, an UPDATE record is added to the
-** session.
+** session.
**
**
** To clarify, if this function is called and then a changeset constructed
-** using [sqlite3session_changeset()], then after applying that changeset to
-** database zFrom the contents of the two compatible tables would be
+** using [sqlite3session_changeset()], then after applying that changeset to
+** database zFrom the contents of the two compatible tables would be
** identical.
**
-** It an error if database zFrom does not exist or does not contain the
-** required compatible table.
+** Unless the call to this function is a no-op as described above, it is an
+** error if database zFrom does not exist or does not contain the required
+** compatible table.
**
** If the operation is successful, SQLITE_OK is returned. Otherwise, an SQLite
** error code. In this case, if argument pzErrMsg is not NULL, *pzErrMsg
-** may be set to point to a buffer containing an English language error
+** may be set to point to a buffer containing an English language error
** message. It is the responsibility of the caller to free this buffer using
** sqlite3_free().
*/
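The typical call sequence for sqlite3session_diff() can be sketched as follows. The helper name and the attached-database name "aux" are our assumptions; error handling is abbreviated.

```c
#include <sqlite3.h>

/* Hypothetical helper: build a changeset that, when applied to aux.t1,
** makes it identical to main.t1. The table is attached to the session
** before sqlite3session_diff() is called. */
static int diff_t1(sqlite3 *db, int *pnChangeset, void **ppChangeset){
  sqlite3_session *pSession = 0;
  char *zErr = 0;
  int rc = sqlite3session_create(db, "main", &pSession);
  if( rc==SQLITE_OK ) rc = sqlite3session_attach(pSession, "t1");
  if( rc==SQLITE_OK ){
    rc = sqlite3session_diff(pSession, "aux", "t1", &zErr);
  }
  if( rc==SQLITE_OK ){
    rc = sqlite3session_changeset(pSession, pnChangeset, ppChangeset);
  }
  sqlite3_free(zErr);              /* English error message, if any */
  sqlite3session_delete(pSession);
  return rc;
}
```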
@@ -10240,19 +11804,19 @@ SQLITE_API int sqlite3session_diff(
** The differences between a patchset and a changeset are that:
**
**
-** <li> DELETE records consist of the primary key fields only. The
+** <li> DELETE records consist of the primary key fields only. The
** original values of other fields are omitted.
-** <li> The original values of any modified fields are omitted from
+** <li> The original values of any modified fields are omitted from
** UPDATE records.
**
**
-** A patchset blob may be used with up to date versions of all
-** sqlite3changeset_xxx API functions except for sqlite3changeset_invert(),
+** A patchset blob may be used with up-to-date versions of all
+** sqlite3changeset_xxx API functions except for sqlite3changeset_invert(),
** which returns SQLITE_CORRUPT if it is passed a patchset. Similarly,
** attempting to use a patchset blob with old versions of the
-** sqlite3changeset_xxx APIs also provokes an SQLITE_CORRUPT error.
+** sqlite3changeset_xxx APIs also provokes an SQLITE_CORRUPT error.
**
-** Because the non-primary key "old.*" fields are omitted, no
+** Because the non-primary key "old.*" fields are omitted, no
** SQLITE_CHANGESET_DATA conflicts can be detected or reported if a patchset
** is passed to the sqlite3changeset_apply() API. Other conflict types work
** in the same way as for changesets.
@@ -10271,22 +11835,30 @@ SQLITE_API int sqlite3session_patchset(
/*
** CAPI3REF: Test if a changeset has recorded any changes.
**
-** Return non-zero if no changes to attached tables have been recorded by
-** the session object passed as the first argument. Otherwise, if one or
+** Return non-zero if no changes to attached tables have been recorded by
+** the session object passed as the first argument. Otherwise, if one or
** more changes have been recorded, return zero.
**
** Even if this function returns zero, it is possible that calling
** [sqlite3session_changeset()] on the session handle may still return a
-** changeset that contains no changes. This can happen when a row in
-** an attached table is modified and then later on the original values
+** changeset that contains no changes. This can happen when a row in
+** an attached table is modified and then later on the original values
** are restored. However, if this function returns non-zero, then it is
-** guaranteed that a call to sqlite3session_changeset() will return a
+** guaranteed that a call to sqlite3session_changeset() will return a
** changeset containing zero changes.
*/
SQLITE_API int sqlite3session_isempty(sqlite3_session *pSession);
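The guarantee above can be used to skip changeset generation entirely when nothing was recorded. A minimal sketch (the helper name is ours):

```c
#include <sqlite3.h>

/* Hypothetical helper: only build a changeset if the session has
** actually recorded something. */
static int maybe_changeset(sqlite3_session *pSession, int *pn, void **pp){
  *pn = 0;
  *pp = 0;
  if( sqlite3session_isempty(pSession) ){
    /* Guaranteed: sqlite3session_changeset() would contain no changes. */
    return SQLITE_OK;
  }
  /* Note: the result may still be empty if earlier changes were undone. */
  return sqlite3session_changeset(pSession, pn, pp);
}
```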
/*
-** CAPI3REF: Create An Iterator To Traverse A Changeset
+** CAPI3REF: Query for the amount of heap memory used by a session object.
+**
+** This API returns the total amount of heap memory in bytes currently
+** used by the session object passed as the only argument.
+*/
+SQLITE_API sqlite3_int64 sqlite3session_memory_used(sqlite3_session *pSession);
+
+/*
+** CAPI3REF: Create An Iterator To Traverse A Changeset
** CONSTRUCTOR: sqlite3_changeset_iter
**
** Create an iterator used to iterate through the contents of a changeset.
@@ -10294,7 +11866,7 @@ SQLITE_API int sqlite3session_isempty(sqlite3_session *pSession);
** is returned. Otherwise, if an error occurs, *pp is set to zero and an
** SQLite error code is returned.
**
-** The following functions can be used to advance and query a changeset
+** The following functions can be used to advance and query a changeset
** iterator created by this function:
**
**
@@ -10311,12 +11883,12 @@ SQLITE_API int sqlite3session_isempty(sqlite3_session *pSession);
**
** Assuming the changeset blob was created by one of the
** [sqlite3session_changeset()], [sqlite3changeset_concat()] or
-** [sqlite3changeset_invert()] functions, all changes within the changeset
-** that apply to a single table are grouped together. This means that when
-** an application iterates through a changeset using an iterator created by
-** this function, all changes that relate to a single table are visited
-** consecutively. There is no chance that the iterator will visit a change
-** the applies to table X, then one for table Y, and then later on visit
+** [sqlite3changeset_invert()] functions, all changes within the changeset
+** that apply to a single table are grouped together. This means that when
+** an application iterates through a changeset using an iterator created by
+** this function, all changes that relate to a single table are visited
+** consecutively. There is no chance that the iterator will visit a change
+** that applies to table X, then one for table Y, and then later on visit
** another change for table X.
**
** The behavior of sqlite3changeset_start_v2() and its streaming equivalent
@@ -10344,7 +11916,7 @@ SQLITE_API int sqlite3changeset_start_v2(
** The following flags may be passed via the 4th parameter to
** [sqlite3changeset_start_v2] and [sqlite3changeset_start_v2_strm]:
**
-** <dt>SQLITE_CHANGESETAPPLY_INVERT<dd>
+** <dt>SQLITE_CHANGESETSTART_INVERT<dd>
** Invert the changeset while iterating through it. This is equivalent to
** inverting a changeset using sqlite3changeset_invert() before applying it.
** It is an error to specify this flag with a patchset.
@@ -10367,12 +11939,12 @@ SQLITE_API int sqlite3changeset_start_v2(
** point to the first change in the changeset. Each subsequent call advances
** the iterator to point to the next change in the changeset (if any). If
** no error occurs and the iterator points to a valid change after a call
-** to sqlite3changeset_next() has advanced it, SQLITE_ROW is returned.
+** to sqlite3changeset_next() has advanced it, SQLITE_ROW is returned.
** Otherwise, if all changes in the changeset have already been visited,
** SQLITE_DONE is returned.
**
-** If an error occurs, an SQLite error code is returned. Possible error
-** codes include SQLITE_CORRUPT (if the changeset buffer is corrupt) or
+** If an error occurs, an SQLite error code is returned. Possible error
+** codes include SQLITE_CORRUPT (if the changeset buffer is corrupt) or
** SQLITE_NOMEM.
*/
SQLITE_API int sqlite3changeset_next(sqlite3_changeset_iter *pIter);
@@ -10387,18 +11959,23 @@ SQLITE_API int sqlite3changeset_next(sqlite3_changeset_iter *pIter);
** call to [sqlite3changeset_next()] must have returned [SQLITE_ROW]. If this
** is not the case, this function returns [SQLITE_MISUSE].
**
-** If argument pzTab is not NULL, then *pzTab is set to point to a
-** nul-terminated utf-8 encoded string containing the name of the table
-** affected by the current change. The buffer remains valid until either
-** sqlite3changeset_next() is called on the iterator or until the
-** conflict-handler function returns. If pnCol is not NULL, then *pnCol is
-** set to the number of columns in the table affected by the change. If
-** pbIndirect is not NULL, then *pbIndirect is set to true (1) if the change
+** Arguments pOp, pnCol and pzTab may not be NULL. Upon return, three
+** outputs are set through these pointers:
+**
+** *pOp is set to one of [SQLITE_INSERT], [SQLITE_DELETE] or [SQLITE_UPDATE],
+** depending on the type of change that the iterator currently points to;
+**
+** *pnCol is set to the number of columns in the table affected by the change; and
+**
+** *pzTab is set to point to a nul-terminated utf-8 encoded string containing
+** the name of the table affected by the current change. The buffer remains
+** valid until either sqlite3changeset_next() is called on the iterator
+** or until the conflict-handler function returns.
+**
+** If pbIndirect is not NULL, then *pbIndirect is set to true (1) if the change
** is an indirect change, or false (0) otherwise. See the documentation for
** [sqlite3session_indirect()] for a description of direct and indirect
-** changes. Finally, if pOp is not NULL, then *pOp is set to one of
-** [SQLITE_INSERT], [SQLITE_DELETE] or [SQLITE_UPDATE], depending on the
-** type of change that the iterator currently points to.
+** changes.
**
** If no error occurs, SQLITE_OK is returned. If an error does occur, an
** SQLite error code is returned. The values of the output variables may not
@@ -10451,7 +12028,7 @@ SQLITE_API int sqlite3changeset_pk(
** The pIter argument passed to this function may either be an iterator
** passed to a conflict-handler by [sqlite3changeset_apply()], or an iterator
** created by [sqlite3changeset_start()]. In the latter case, the most recent
-** call to [sqlite3changeset_next()] must have returned SQLITE_ROW.
+** call to [sqlite3changeset_next()] must have returned SQLITE_ROW.
** Furthermore, it may only be called if the type of change that the iterator
** currently points to is either [SQLITE_DELETE] or [SQLITE_UPDATE]. Otherwise,
** this function returns [SQLITE_MISUSE] and sets *ppValue to NULL.
@@ -10461,9 +12038,9 @@ SQLITE_API int sqlite3changeset_pk(
** [SQLITE_RANGE] is returned and *ppValue is set to NULL.
**
** If successful, this function sets *ppValue to point to a protected
-** sqlite3_value object containing the iVal'th value from the vector of
+** sqlite3_value object containing the iVal'th value from the vector of
** original row values stored as part of the UPDATE or DELETE change and
-** returns SQLITE_OK. The name of the function comes from the fact that this
+** returns SQLITE_OK. The name of the function comes from the fact that this
** is similar to the "old.*" columns available to update or delete triggers.
**
** If some other error occurs (e.g. an OOM condition), an SQLite error code
@@ -10482,7 +12059,7 @@ SQLITE_API int sqlite3changeset_old(
** The pIter argument passed to this function may either be an iterator
** passed to a conflict-handler by [sqlite3changeset_apply()], or an iterator
** created by [sqlite3changeset_start()]. In the latter case, the most recent
-** call to [sqlite3changeset_next()] must have returned SQLITE_ROW.
+** call to [sqlite3changeset_next()] must have returned SQLITE_ROW.
** Furthermore, it may only be called if the type of change that the iterator
** currently points to is either [SQLITE_UPDATE] or [SQLITE_INSERT]. Otherwise,
** this function returns [SQLITE_MISUSE] and sets *ppValue to NULL.
@@ -10492,12 +12069,12 @@ SQLITE_API int sqlite3changeset_old(
** [SQLITE_RANGE] is returned and *ppValue is set to NULL.
**
** If successful, this function sets *ppValue to point to a protected
-** sqlite3_value object containing the iVal'th value from the vector of
+** sqlite3_value object containing the iVal'th value from the vector of
** new row values stored as part of the UPDATE or INSERT change and
** returns SQLITE_OK. If the change is an UPDATE and does not include
-** a new value for the requested column, *ppValue is set to NULL and
-** SQLITE_OK returned. The name of the function comes from the fact that
-** this is similar to the "new.*" columns available to update or delete
+** a new value for the requested column, *ppValue is set to NULL and
+** SQLITE_OK returned. The name of the function comes from the fact that
+** this is similar to the "new.*" columns available to update or delete
** triggers.
**
** If some other error occurs (e.g. an OOM condition), an SQLite error code
@@ -10524,7 +12101,7 @@ SQLITE_API int sqlite3changeset_new(
** [SQLITE_RANGE] is returned and *ppValue is set to NULL.
**
** If successful, this function sets *ppValue to point to a protected
-** sqlite3_value object containing the iVal'th value from the
+** sqlite3_value object containing the iVal'th value from the
** "conflicting row" associated with the current conflict-handler callback
** and returns SQLITE_OK.
**
@@ -10568,7 +12145,7 @@ SQLITE_API int sqlite3changeset_fk_conflicts(
** call has no effect.
**
** If an error was encountered within a call to an sqlite3changeset_xxx()
-** function (for example an [SQLITE_CORRUPT] in [sqlite3changeset_next()] or an
+** function (for example an [SQLITE_CORRUPT] in [sqlite3changeset_next()] or an
** [SQLITE_NOMEM] in [sqlite3changeset_new()]) then an error code corresponding
** to that error is returned by this function. Otherwise, SQLITE_OK is
** returned. This is to allow the following pattern (pseudo-code):
@@ -10580,7 +12157,7 @@ SQLITE_API int sqlite3changeset_fk_conflicts(
** }
** rc = sqlite3changeset_finalize();
** if( rc!=SQLITE_OK ){
-** // An error has occurred
+** // An error has occurred
** }
**
*/
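The pseudo-code pattern above, written out as a C sketch. The counting helper is our invention; only the documented iterator APIs are used.

```c
#include <sqlite3.h>

/* Hypothetical helper: count the changes in a changeset blob. */
static int count_changes(int nChangeset, void *pChangeset, int *pnChange){
  sqlite3_changeset_iter *pIter = 0;
  int rc = sqlite3changeset_start(&pIter, nChangeset, pChangeset);
  if( rc!=SQLITE_OK ) return rc;
  *pnChange = 0;
  while( SQLITE_ROW==sqlite3changeset_next(pIter) ){
    const char *zTab;
    int nCol, op;
    if( sqlite3changeset_op(pIter, &zTab, &nCol, &op, 0)==SQLITE_OK ){
      (*pnChange)++;
    }
  }
  /* Returns the error (if any) hit during iteration, else SQLITE_OK. */
  return sqlite3changeset_finalize(pIter);
}
```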
@@ -10608,7 +12185,7 @@ SQLITE_API int sqlite3changeset_finalize(sqlite3_changeset_iter *pIter);
** zeroed and an SQLite error code returned.
**
** It is the responsibility of the caller to eventually call sqlite3_free()
-** on the *ppOut pointer to free the buffer allocation following a successful
+** on the *ppOut pointer to free the buffer allocation following a successful
** call to this function.
**
** WARNING/TODO: This function currently assumes that the input is a valid
@@ -10622,11 +12199,11 @@ SQLITE_API int sqlite3changeset_invert(
/*
** CAPI3REF: Concatenate Two Changeset Objects
**
-** This function is used to concatenate two changesets, A and B, into a
+** This function is used to concatenate two changesets, A and B, into a
** single changeset. The result is a changeset equivalent to applying
-** changeset A followed by changeset B.
+** changeset A followed by changeset B.
**
-** This function combines the two input changesets using an
+** This function combines the two input changesets using an
** sqlite3_changegroup object. Calling it produces similar results as the
** following code fragment:
**
@@ -10654,11 +12231,10 @@ SQLITE_API int sqlite3changeset_concat(
void **ppOut /* OUT: Buffer containing output changeset */
);
-
/*
** CAPI3REF: Changegroup Handle
**
-** A changegroup is an object used to combine two or more
+** A changegroup is an object used to combine two or more
** [changesets] or [patchsets]
*/
typedef struct sqlite3_changegroup sqlite3_changegroup;
@@ -10674,7 +12250,7 @@ typedef struct sqlite3_changegroup sqlite3_changegroup;
**
** If successful, this function returns SQLITE_OK and populates (*pp) with
** a pointer to a new sqlite3_changegroup object before returning. The caller
-** should eventually free the returned object using a call to
+** should eventually free the returned object using a call to
** sqlite3changegroup_delete(). If an error occurs, an SQLite error code
** (i.e. SQLITE_NOMEM) is returned and *pp is set to NULL.
**
@@ -10686,7 +12262,7 @@ typedef struct sqlite3_changegroup sqlite3_changegroup;
** <li> Zero or more changesets (or patchsets) are added to the object
** by calling sqlite3changegroup_add().
**
-** <li> The result of combining all input changesets together is obtained
+** <li> The result of combining all input changesets together is obtained
** by the application via a call to sqlite3changegroup_output().
**
** <li> The object is deleted using a call to sqlite3changegroup_delete().
@@ -10695,18 +12271,50 @@ typedef struct sqlite3_changegroup sqlite3_changegroup;
** Any number of calls to add() and output() may be made between the calls to
** new() and delete(), and in any order.
**
-** As well as the regular sqlite3changegroup_add() and
+** As well as the regular sqlite3changegroup_add() and
** sqlite3changegroup_output() functions, also available are the streaming
** versions sqlite3changegroup_add_strm() and sqlite3changegroup_output_strm().
*/
SQLITE_API int sqlite3changegroup_new(sqlite3_changegroup **pp);
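The new() / add() / output() / delete() lifecycle described above can be sketched as a single helper (the helper name is ours; on success the caller owns the output buffer):

```c
#include <sqlite3.h>

/* Hypothetical helper: combine two changesets via the
** new() -> add() -> output() -> delete() lifecycle. */
static int combine_two(
  int nA, void *pA,                /* first input changeset */
  int nB, void *pB,                /* second input changeset */
  int *pnOut, void **ppOut         /* OUT: combined changeset */
){
  sqlite3_changegroup *pGrp = 0;
  int rc = sqlite3changegroup_new(&pGrp);
  if( rc==SQLITE_OK ) rc = sqlite3changegroup_add(pGrp, nA, pA);
  if( rc==SQLITE_OK ) rc = sqlite3changegroup_add(pGrp, nB, pB);
  if( rc==SQLITE_OK ) rc = sqlite3changegroup_output(pGrp, pnOut, ppOut);
  sqlite3changegroup_delete(pGrp);
  return rc;  /* on success, caller frees *ppOut with sqlite3_free() */
}
```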
+/*
+** CAPI3REF: Add a Schema to a Changegroup
+** METHOD: sqlite3_changegroup_schema
+**
+** This method may be used to optionally enforce the rule that the changesets
+** added to the changegroup handle must match the schema of database zDb
+** ("main", "temp", or the name of an attached database). If
+** sqlite3changegroup_add() is called to add a changeset that is not compatible
+** with the configured schema, SQLITE_SCHEMA is returned and the changegroup
+** object is left in an undefined state.
+**
+** A changeset schema is considered compatible with the database schema in
+** the same way as for sqlite3changeset_apply(). Specifically, for each
+** table in the changeset, there exists a database table with:
+**
+** <ul>
+** <li> The name identified by the changeset, and
+** <li> at least as many columns as recorded in the changeset, and
+** <li> the primary key columns in the same position as recorded in
+** the changeset.
+** </ul>
+**
+** The output of the changegroup object always has the same schema as the
+** database nominated using this function. In cases where changesets passed
+** to sqlite3changegroup_add() have fewer columns than the corresponding table
+** in the database schema, these are filled in using the default column
+** values from the database schema. This makes it possible to combine
+** changesets that have different numbers of columns for a single table
+** within a changegroup, provided that they are otherwise compatible.
+*/
+SQLITE_API int sqlite3changegroup_schema(sqlite3_changegroup*, sqlite3*, const char *zDb);
+
/*
** CAPI3REF: Add A Changeset To A Changegroup
** METHOD: sqlite3_changegroup
**
** Add all changes within the changeset (or patchset) in buffer pData (size
-** nData bytes) to the changegroup.
+** nData bytes) to the changegroup.
**
** If the buffer contains a patchset, then all prior calls to this function
** on the same changegroup object must also have specified patchsets. Or, if
@@ -10733,7 +12341,7 @@ SQLITE_API int sqlite3changegroup_new(sqlite3_changegroup **pp);
** changeset was recorded immediately after the changesets already
** added to the changegroup.
** | INSERT | UPDATE |
-** The INSERT change remains in the changegroup. The values in the
+** The INSERT change remains in the changegroup. The values in the
** INSERT change are modified as if the row was inserted by the
** existing change and then updated according to the new change.
** | INSERT | DELETE |
@@ -10744,17 +12352,17 @@ SQLITE_API int sqlite3changegroup_new(sqlite3_changegroup **pp);
** changeset was recorded immediately after the changesets already
** added to the changegroup.
** | UPDATE | UPDATE |
-** The existing UPDATE remains within the changegroup. It is amended
-** so that the accompanying values are as if the row was updated once
+** The existing UPDATE remains within the changegroup. It is amended
+** so that the accompanying values are as if the row was updated once
** by the existing change and then again by the new change.
** | UPDATE | DELETE |
** The existing UPDATE is replaced by the new DELETE within the
** changegroup.
** | DELETE | INSERT |
** If one or more of the column values in the row inserted by the
-** new change differ from those in the row deleted by the existing
+** new change differ from those in the row deleted by the existing
** change, the existing DELETE is replaced by an UPDATE within the
-** changegroup. Otherwise, if the inserted row is exactly the same
+** changegroup. Otherwise, if the inserted row is exactly the same
** as the deleted row, the existing DELETE is simply discarded.
** | DELETE | UPDATE |
** The new change is ignored. This case does not occur if the new
@@ -10769,16 +12377,45 @@ SQLITE_API int sqlite3changegroup_new(sqlite3_changegroup **pp);
** If the new changeset contains changes to a table that is already present
** in the changegroup, then the number of columns and the position of the
** primary key columns for the table must be consistent. If this is not the
-** case, this function fails with SQLITE_SCHEMA. If the input changeset
-** appears to be corrupt and the corruption is detected, SQLITE_CORRUPT is
-** returned. Or, if an out-of-memory condition occurs during processing, this
-** function returns SQLITE_NOMEM. In all cases, if an error occurs the state
-** of the final contents of the changegroup is undefined.
+** case, this function fails with SQLITE_SCHEMA. Except, if the changegroup
+** object has been configured with a database schema using the
+** sqlite3changegroup_schema() API, then it is possible to combine changesets
+** with different numbers of columns for a single table, provided that
+** they are otherwise compatible.
**
-** If no error occurs, SQLITE_OK is returned.
+** If the input changeset appears to be corrupt and the corruption is
+** detected, SQLITE_CORRUPT is returned. Or, if an out-of-memory condition
+** occurs during processing, this function returns SQLITE_NOMEM.
+**
+** In all cases, if an error occurs the state of the final contents of the
+** changegroup is undefined. If no error occurs, SQLITE_OK is returned.
*/
SQLITE_API int sqlite3changegroup_add(sqlite3_changegroup*, int nData, void *pData);
+/*
+** CAPI3REF: Add A Single Change To A Changegroup
+** METHOD: sqlite3_changegroup
+**
+** This function adds the single change currently indicated by the iterator
+** passed as the second argument to the changegroup object. The rules for
+** adding the change are just as described for [sqlite3changegroup_add()].
+**
+** If the change is successfully added to the changegroup, SQLITE_OK is
+** returned. Otherwise, an SQLite error code is returned.
+**
+** The iterator must point to a valid entry when this function is called.
+** If it does not, SQLITE_ERROR is returned and no change is added to the
+** changegroup. Additionally, the iterator must not have been opened with
+** the SQLITE_CHANGESETAPPLY_INVERT flag. In this case SQLITE_ERROR is also
+** returned.
+*/
+SQLITE_API int sqlite3changegroup_add_change(
+ sqlite3_changegroup*,
+ sqlite3_changeset_iter*
+);
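The iterator-based rules above suggest a per-change filtering pattern. A hedged sketch (the helper name and the "direct changes only" policy are our invention):

```c
#include <sqlite3.h>

/* Hypothetical helper: copy only direct (non-indirect) changes from a
** changeset into an existing changegroup. */
static int add_direct_changes(
  sqlite3_changegroup *pGrp,
  int nChangeset, void *pChangeset
){
  sqlite3_changeset_iter *pIter = 0;
  int rc = sqlite3changeset_start(&pIter, nChangeset, pChangeset);
  if( rc!=SQLITE_OK ) return rc;
  while( SQLITE_ROW==sqlite3changeset_next(pIter) ){
    const char *zTab;
    int nCol, op, bIndirect;
    sqlite3changeset_op(pIter, &zTab, &nCol, &op, &bIndirect);
    if( !bIndirect ){
      rc = sqlite3changegroup_add_change(pGrp, pIter);
      if( rc!=SQLITE_OK ) break;
    }
  }
  {
    int rc2 = sqlite3changeset_finalize(pIter);
    if( rc==SQLITE_OK ) rc = rc2;
  }
  return rc;
}
```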
+
+
+
/*
** CAPI3REF: Obtain A Composite Changeset From A Changegroup
** METHOD: sqlite3_changegroup
@@ -10799,7 +12436,7 @@ SQLITE_API int sqlite3changegroup_add(sqlite3_changegroup*, int nData, void *pDa
**
** If an error occurs, an SQLite error code is returned and the output
** variables (*pnData) and (*ppData) are set to 0. Otherwise, SQLITE_OK
-** is returned and the output variables are set to the size of and a
+** is returned and the output variables are set to the size of and a
** pointer to the output buffer, respectively. In this case it is the
** responsibility of the caller to eventually free the buffer using a
** call to sqlite3_free().
@@ -10821,27 +12458,45 @@ SQLITE_API void sqlite3changegroup_delete(sqlite3_changegroup*);
**
** Apply a changeset or patchset to a database. These functions attempt to
** update the "main" database attached to handle db with the changes found in
-** the changeset passed via the second and third arguments.
+** the changeset passed via the second and third arguments.
+**
+** All changes made by these functions are enclosed in a savepoint transaction.
+** If any other error (aside from a constraint failure when attempting to
+** write to the target database) occurs, then the savepoint transaction is
+** rolled back, restoring the target database to its original state, and an
+** SQLite error code returned. Additionally, starting with version 3.51.0,
+** an error code and error message that may be accessed using the
+** [sqlite3_errcode()] and [sqlite3_errmsg()] APIs are left in the database
+** handle.
**
** The fourth argument (xFilter) passed to these functions is the "filter
-** callback". If it is not NULL, then for each table affected by at least one
-** change in the changeset, the filter callback is invoked with
-** the table name as the second argument, and a copy of the context pointer
-** passed as the sixth argument as the first. If the "filter callback"
-** returns zero, then no attempt is made to apply any changes to the table.
-** Otherwise, if the return value is non-zero or the xFilter argument to
-** is NULL, all changes related to the table are attempted.
-**
-** For each table that is not excluded by the filter callback, this function
-** tests that the target database contains a compatible table. A table is
+** callback". This may be passed NULL, in which case all changes in the
+** changeset are applied to the database. For sqlite3changeset_apply() and
+** sqlite3_changeset_apply_v2(), if it is not NULL, then it is invoked once
+** for each table affected by at least one change in the changeset. In this
+** case the table name is passed as the second argument, and a copy of
+** the context pointer passed as the sixth argument to apply() or apply_v2()
+** as the first. If the "filter callback" returns zero, then no attempt is
+** made to apply any changes to the table. Otherwise, if the return value is
+** non-zero, all changes related to the table are attempted.
+**
+** For sqlite3_changeset_apply_v3(), the xFilter callback is invoked once
+** per change. The second argument in this case is an sqlite3_changeset_iter
+** that may be queried using the usual APIs for the details of the current
+** change. If the "filter callback" returns zero in this case, then no attempt
+** is made to apply the current change. If it returns non-zero, the change
+** is applied.
+**
+** For each table that is not excluded by the filter callback, this function
+** tests that the target database contains a compatible table. A table is
** considered compatible if all of the following are true:
**
**
-** - The table has the same name as the name recorded in the
+** - The table has the same name as the name recorded in the
** changeset, and
-** - The table has at least as many columns as recorded in the
+** - The table has at least as many columns as recorded in the
** changeset, and
-** - The table has primary key columns in the same position as
+** - The table has primary key columns in the same position as
** recorded in the changeset.
**
**
@@ -10850,35 +12505,35 @@ SQLITE_API void sqlite3changegroup_delete(sqlite3_changegroup*);
** via the sqlite3_log() mechanism with the error code SQLITE_SCHEMA. At most
** one such warning is issued for each table in the changeset.
**
-** For each change for which there is a compatible table, an attempt is made
-** to modify the table contents according to the UPDATE, INSERT or DELETE
-** change. If a change cannot be applied cleanly, the conflict handler
-** function passed as the fifth argument to sqlite3changeset_apply() may be
-** invoked. A description of exactly when the conflict handler is invoked for
-** each type of change is below.
+** For each change for which there is a compatible table, an attempt is made
+** to modify the table contents according to each UPDATE, INSERT or DELETE
+** change that is not excluded by a filter callback. If a change cannot be
+** applied cleanly, the conflict handler function passed as the fifth argument
+** to sqlite3changeset_apply() may be invoked. A description of exactly when
+** the conflict handler is invoked for each type of change is below.
**
** Unlike the xFilter argument, xConflict may not be passed NULL. The results
** of passing anything other than a valid function pointer as the xConflict
** argument are undefined.
**
** Each time the conflict handler function is invoked, it must return one
-** of [SQLITE_CHANGESET_OMIT], [SQLITE_CHANGESET_ABORT] or
+** of [SQLITE_CHANGESET_OMIT], [SQLITE_CHANGESET_ABORT] or
** [SQLITE_CHANGESET_REPLACE]. SQLITE_CHANGESET_REPLACE may only be returned
** if the second argument passed to the conflict handler is either
** SQLITE_CHANGESET_DATA or SQLITE_CHANGESET_CONFLICT. If the conflict-handler
** returns an illegal value, any changes already made are rolled back and
-** the call to sqlite3changeset_apply() returns SQLITE_MISUSE. Different
+** the call to sqlite3changeset_apply() returns SQLITE_MISUSE. Different
** actions are taken by sqlite3changeset_apply() depending on the value
** returned by each invocation of the conflict-handler function. Refer to
-** the documentation for the three
+** the documentation for the three
** [SQLITE_CHANGESET_OMIT|available return values] for details.
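The legal-return-value rule above can be sketched in plain C. This is a hypothetical handler (the real callback receives an `sqlite3_changeset_iter*` rather than a `void*` third argument); the constant values mirror the `SQLITE_CHANGESET_*` defines that appear later in this header. It returns REPLACE only in the two cases where that is legal, and OMIT otherwise, so it never triggers the SQLITE_MISUSE path:

```c
#include <assert.h>

/* Values mirror the SQLITE_CHANGESET_* constants defined in this header. */
#define SQLITE_CHANGESET_DATA        1
#define SQLITE_CHANGESET_NOTFOUND    2
#define SQLITE_CHANGESET_CONFLICT    3
#define SQLITE_CHANGESET_CONSTRAINT  4
#define SQLITE_CHANGESET_FOREIGN_KEY 5

#define SQLITE_CHANGESET_OMIT        0
#define SQLITE_CHANGESET_REPLACE     1
#define SQLITE_CHANGESET_ABORT       2

/* Hypothetical conflict handler: prefer the incoming change whenever a
** conflicting row exists (DATA or CONFLICT), otherwise skip the change.
** REPLACE may only be returned for DATA and CONFLICT; returning it for
** any other eConflict value would cause SQLITE_MISUSE. */
static int xConflict(void *pCtx, int eConflict, void *pIter){
  (void)pCtx; (void)pIter;
  if( eConflict==SQLITE_CHANGESET_DATA
   || eConflict==SQLITE_CHANGESET_CONFLICT ){
    return SQLITE_CHANGESET_REPLACE;
  }
  return SQLITE_CHANGESET_OMIT;
}
```

A handler like this would be passed as the fifth argument to sqlite3changeset_apply().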
**
**
** - DELETE Changes
-** For each DELETE change, the function checks if the target database
-** contains a row with the same primary key value (or values) as the
-** original row values stored in the changeset. If it does, and the values
-** stored in all non-primary key columns also match the values stored in
+** For each DELETE change, the function checks if the target database
+** contains a row with the same primary key value (or values) as the
+** original row values stored in the changeset. If it does, and the values
+** stored in all non-primary key columns also match the values stored in
** the changeset the row is deleted from the target database.
**
** If a row with matching primary key values is found, but one or more of
@@ -10907,22 +12562,22 @@ SQLITE_API void sqlite3changegroup_delete(sqlite3_changegroup*);
** database table, the trailing fields are populated with their default
** values.
**
-** If the attempt to insert the row fails because the database already
+** If the attempt to insert the row fails because the database already
** contains a row with the same primary key values, the conflict handler
-** function is invoked with the second argument set to
+** function is invoked with the second argument set to
** [SQLITE_CHANGESET_CONFLICT].
**
** If the attempt to insert the row fails because of some other constraint
-** violation (e.g. NOT NULL or UNIQUE), the conflict handler function is
+** violation (e.g. NOT NULL or UNIQUE), the conflict handler function is
** invoked with the second argument set to [SQLITE_CHANGESET_CONSTRAINT].
-** This includes the case where the INSERT operation is re-attempted because
-** an earlier call to the conflict handler function returned
+** This includes the case where the INSERT operation is re-attempted because
+** an earlier call to the conflict handler function returned
** [SQLITE_CHANGESET_REPLACE].
**
**
** - UPDATE Changes
-** For each UPDATE change, the function checks if the target database
-** contains a row with the same primary key value (or values) as the
-** original row values stored in the changeset. If it does, and the values
+** For each UPDATE change, the function checks if the target database
+** contains a row with the same primary key value (or values) as the
+** original row values stored in the changeset. If it does, and the values
** stored in all modified non-primary key columns also match the values
** stored in the changeset the row is updated within the target database.
**
@@ -10938,12 +12593,12 @@ SQLITE_API void sqlite3changegroup_delete(sqlite3_changegroup*);
** the conflict-handler function is invoked with [SQLITE_CHANGESET_NOTFOUND]
** passed as the second argument.
**
-** If the UPDATE operation is attempted, but SQLite returns
-** SQLITE_CONSTRAINT, the conflict-handler function is invoked with
+** If the UPDATE operation is attempted, but SQLite returns
+** SQLITE_CONSTRAINT, the conflict-handler function is invoked with
** [SQLITE_CHANGESET_CONSTRAINT] passed as the second argument.
-** This includes the case where the UPDATE operation is attempted after
+** This includes the case where the UPDATE operation is attempted after
** an earlier call to the conflict handler function returned
-** [SQLITE_CHANGESET_REPLACE].
+** [SQLITE_CHANGESET_REPLACE].
**
**
** It is safe to execute SQL statements, including those that write to the
@@ -10951,15 +12606,9 @@ SQLITE_API void sqlite3changegroup_delete(sqlite3_changegroup*);
** This can be used to further customize the application's conflict
** resolution strategy.
**
-** All changes made by these functions are enclosed in a savepoint transaction.
-** If any other error (aside from a constraint failure when attempting to
-** write to the target database) occurs, then the savepoint transaction is
-** rolled back, restoring the target database to its original state, and an
-** SQLite error code returned.
-**
** If the output parameters (ppRebase) and (pnRebase) are non-NULL and
** the input is a changeset (not a patchset), then sqlite3changeset_apply_v2()
-** may set (*ppRebase) to point to a "rebase" that may be used with the
+** may set (*ppRebase) to point to a "rebase" that may be used with the
** sqlite3_rebaser APIs buffer before returning. In this case (*pnRebase)
** is set to the size of the buffer in bytes. It is the responsibility of the
** caller to eventually free any such buffer using sqlite3_free(). The buffer
@@ -11006,6 +12655,23 @@ SQLITE_API int sqlite3changeset_apply_v2(
void **ppRebase, int *pnRebase, /* OUT: Rebase data */
int flags /* SESSION_CHANGESETAPPLY_* flags */
);
+SQLITE_API int sqlite3changeset_apply_v3(
+ sqlite3 *db, /* Apply change to "main" db of this handle */
+ int nChangeset, /* Size of changeset in bytes */
+ void *pChangeset, /* Changeset blob */
+ int(*xFilter)(
+ void *pCtx, /* Copy of sixth arg to _apply() */
+ sqlite3_changeset_iter *p /* Handle describing change */
+ ),
+ int(*xConflict)(
+ void *pCtx, /* Copy of sixth arg to _apply() */
+ int eConflict, /* DATA, MISSING, CONFLICT, CONSTRAINT */
+ sqlite3_changeset_iter *p /* Handle describing change and conflict */
+ ),
+ void *pCtx, /* First argument passed to xConflict */
+ void **ppRebase, int *pnRebase, /* OUT: Rebase data */
+ int flags /* SESSION_CHANGESETAPPLY_* flags */
+);
/*
** CAPI3REF: Flags for sqlite3changeset_apply_v2
@@ -11020,18 +12686,39 @@ SQLITE_API int sqlite3changeset_apply_v2(
** SAVEPOINT is committed if the changeset or patchset is successfully
** applied, or rolled back if an error occurs. Specifying this flag
** causes the sessions module to omit this savepoint. In this case, if the
-** caller has an open transaction or savepoint when apply_v2() is called,
+** caller has an open transaction or savepoint when apply_v2() is called,
** it may revert the partially applied changeset by rolling it back.
**
** - SQLITE_CHANGESETAPPLY_INVERT
** Invert the changeset before applying it. This is equivalent to inverting
** a changeset using sqlite3changeset_invert() before applying it. It is
** an error to specify this flag with a patchset.
+**
+** - SQLITE_CHANGESETAPPLY_IGNORENOOP
+** Do not invoke the conflict handler callback for any changes that
+** would not actually modify the database even if they were applied.
+** Specifically, this means that the conflict handler is not invoked
+** for:
+**
+** - a delete change if the row being deleted cannot be found,
+** - an update change if the modified fields are already set to
+** their new values in the conflicting row, or
+** - an insert change if all fields of the conflicting row match
+** the row being inserted.
+**
+**
+** - SQLITE_CHANGESETAPPLY_FKNOACTION
+** If this flag is set, then all foreign key constraints in the target
+** database behave as if they were declared with "ON UPDATE NO ACTION ON
+** DELETE NO ACTION", even if they are actually CASCADE, RESTRICT, SET NULL
+** or SET DEFAULT.
*/
#define SQLITE_CHANGESETAPPLY_NOSAVEPOINT 0x0001
#define SQLITE_CHANGESETAPPLY_INVERT 0x0002
+#define SQLITE_CHANGESETAPPLY_IGNORENOOP 0x0004
+#define SQLITE_CHANGESETAPPLY_FKNOACTION 0x0008
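Because these flag values are distinct bits, they can be combined with bitwise OR in the final `flags` argument to apply_v2()/apply_v3(). A minimal sketch (the helper name is hypothetical; the constant values are the ones defined just above):

```c
#include <assert.h>

/* Values copied from the SQLITE_CHANGESETAPPLY_* defines above. */
#define SQLITE_CHANGESETAPPLY_NOSAVEPOINT 0x0001
#define SQLITE_CHANGESETAPPLY_INVERT      0x0002
#define SQLITE_CHANGESETAPPLY_IGNORENOOP  0x0004
#define SQLITE_CHANGESETAPPLY_FKNOACTION  0x0008

/* Build a flags word that both inverts the changeset and suppresses
** conflict callbacks for no-op changes. */
static int apply_flags(void){
  return SQLITE_CHANGESETAPPLY_INVERT | SQLITE_CHANGESETAPPLY_IGNORENOOP;
}
```

The resulting value would be passed unchanged as the last argument to sqlite3changeset_apply_v2() or _v3().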
-/*
+/*
** CAPI3REF: Constants Passed To The Conflict Handler
**
** Values that may be passed as the second argument to a conflict-handler.
@@ -11040,32 +12727,32 @@ SQLITE_API int sqlite3changeset_apply_v2(
**
** - SQLITE_CHANGESET_DATA
** The conflict handler is invoked with CHANGESET_DATA as the second argument
** when processing a DELETE or UPDATE change if a row with the required
-** PRIMARY KEY fields is present in the database, but one or more other
-** (non primary-key) fields modified by the update do not contain the
+** PRIMARY KEY fields is present in the database, but one or more other
+** (non primary-key) fields modified by the update do not contain the
** expected "before" values.
-**
+**
** The conflicting row, in this case, is the database row with the matching
** primary key.
-**
+**
**
** - SQLITE_CHANGESET_NOTFOUND
** The conflict handler is invoked with CHANGESET_NOTFOUND as the second
** argument when processing a DELETE or UPDATE change if a row with the
** required PRIMARY KEY fields is not present in the database.
-**
+**
** There is no conflicting row in this case. The results of invoking the
** sqlite3changeset_conflict() API are undefined.
-**
+**
**
** - SQLITE_CHANGESET_CONFLICT
** CHANGESET_CONFLICT is passed as the second argument to the conflict
-** handler while processing an INSERT change if the operation would result
+** handler while processing an INSERT change if the operation would result
** in duplicate primary key values.
-**
+**
** The conflicting row in this case is the database row with the matching
** primary key.
**
**
** - SQLITE_CHANGESET_FOREIGN_KEY
** If foreign key handling is enabled, and applying a changeset leaves the
-** database in a state containing foreign key violations, the conflict
+** database in a state containing foreign key violations, the conflict
** handler is invoked with CHANGESET_FOREIGN_KEY as the second argument
** exactly once before the changeset is committed. If the conflict handler
** returns CHANGESET_OMIT, the changes, including those that caused the
@@ -11075,12 +12762,12 @@ SQLITE_API int sqlite3changeset_apply_v2(
** No current or conflicting row information is provided. The only function
** it is possible to call on the supplied sqlite3_changeset_iter handle
** is sqlite3changeset_fk_conflicts().
-**
+**
**
** - SQLITE_CHANGESET_CONSTRAINT
-** If any other constraint violation occurs while applying a change (i.e.
-** a UNIQUE, CHECK or NOT NULL constraint), the conflict handler is
+** If any other constraint violation occurs while applying a change (i.e.
+** a UNIQUE, CHECK or NOT NULL constraint), the conflict handler is
** invoked with CHANGESET_CONSTRAINT as the second argument.
-**
+**
** There is no conflicting row in this case. The results of invoking the
** sqlite3changeset_conflict() API are undefined.
**
@@ -11092,7 +12779,7 @@ SQLITE_API int sqlite3changeset_apply_v2(
#define SQLITE_CHANGESET_CONSTRAINT 4
#define SQLITE_CHANGESET_FOREIGN_KEY 5
-/*
+/*
** CAPI3REF: Constants Returned By The Conflict Handler
**
** A conflict handler callback must return one of the following three values.
@@ -11100,13 +12787,13 @@ SQLITE_API int sqlite3changeset_apply_v2(
**
** - SQLITE_CHANGESET_OMIT
** If a conflict handler returns this value no special action is taken. The
-** change that caused the conflict is not applied. The session module
+** change that caused the conflict is not applied. The session module
** continues to the next change in the changeset.
**
**
** - SQLITE_CHANGESET_REPLACE
** This value may only be returned if the second argument to the conflict
** handler was SQLITE_CHANGESET_DATA or SQLITE_CHANGESET_CONFLICT. If this
-** is not the case, any changes applied so far are rolled back and the
+** is not the case, any changes applied so far are rolled back and the
** call to sqlite3changeset_apply() returns SQLITE_MISUSE.
**
** If CHANGESET_REPLACE is returned by an SQLITE_CHANGESET_DATA conflict
@@ -11119,7 +12806,7 @@ SQLITE_API int sqlite3changeset_apply_v2(
** the original row is restored to the database before continuing.
**
**
** - SQLITE_CHANGESET_ABORT
-** If this value is returned, any changes applied so far are rolled back
+** If this value is returned, any changes applied so far are rolled back
** and the call to sqlite3changeset_apply() returns SQLITE_ABORT.
**
*/
@@ -11127,20 +12814,20 @@ SQLITE_API int sqlite3changeset_apply_v2(
#define SQLITE_CHANGESET_REPLACE 1
#define SQLITE_CHANGESET_ABORT 2
-/*
+/*
** CAPI3REF: Rebasing changesets
** EXPERIMENTAL
**
** Suppose there is a site hosting a database in state S0. And that
** modifications are made that move that database to state S1 and a
** changeset recorded (the "local" changeset). Then, a changeset based
-** on S0 is received from another site (the "remote" changeset) and
-** applied to the database. The database is then in state
+** on S0 is received from another site (the "remote" changeset) and
+** applied to the database. The database is then in state
** (S1+"remote"), where the exact state depends on any conflict
** resolution decisions (OMIT or REPLACE) made while applying "remote".
-** Rebasing a changeset is to update it to take those conflict
+** Rebasing a changeset is to update it to take those conflict
** resolution decisions into account, so that the same conflicts
-** do not have to be resolved elsewhere in the network.
+** do not have to be resolved elsewhere in the network.
**
** For example, if both the local and remote changesets contain an
** INSERT of the same key on "CREATE TABLE t1(a PRIMARY KEY, b)":
@@ -11159,7 +12846,7 @@ SQLITE_API int sqlite3changeset_apply_v2(
**
**
** - Local INSERT
-** This may only conflict with a remote INSERT. If the conflict
+** This may only conflict with a remote INSERT. If the conflict
** resolution was OMIT, then add an UPDATE change to the rebased
** changeset. Or, if the conflict resolution was REPLACE, add
** nothing to the rebased changeset.
@@ -11183,12 +12870,12 @@ SQLITE_API int sqlite3changeset_apply_v2(
** the old.* values are rebased using the new.* values in the remote
** change. Or, if the resolution is REPLACE, then the change is copied
** into the rebased changeset with updates to columns also updated by
-** the conflicting remote UPDATE removed. If this means no columns would
+** the conflicting remote UPDATE removed. If this means no columns would
** be updated, the change is omitted.
**
**
-** A local change may be rebased against multiple remote changes
-** simultaneously. If a single key is modified by multiple remote
+** A local change may be rebased against multiple remote changes
+** simultaneously. If a single key is modified by multiple remote
** changesets, they are combined as follows before the local changeset
** is rebased:
**
@@ -11201,10 +12888,10 @@ SQLITE_API int sqlite3changeset_apply_v2(
** of the OMIT resolutions.
**
**
-** Note that conflict resolutions from multiple remote changesets are
-** combined on a per-field basis, not per-row. This means that in the
-** case of multiple remote UPDATE operations, some fields of a single
-** local change may be rebased for REPLACE while others are rebased for
+** Note that conflict resolutions from multiple remote changesets are
+** combined on a per-field basis, not per-row. This means that in the
+** case of multiple remote UPDATE operations, some fields of a single
+** local change may be rebased for REPLACE while others are rebased for
** OMIT.
**
** In order to rebase a local changeset, the remote changeset must first
@@ -11212,7 +12899,7 @@ SQLITE_API int sqlite3changeset_apply_v2(
** the buffer of rebase information captured. Then:
**
**
-** - An sqlite3_rebaser object is created by calling
+** - An sqlite3_rebaser object is created by calling
** sqlite3rebaser_create().
** - The new object is configured with the rebase buffer obtained from
** sqlite3changeset_apply_v2() by calling sqlite3rebaser_configure().
@@ -11233,8 +12920,8 @@ typedef struct sqlite3_rebaser sqlite3_rebaser;
**
** Allocate a new changeset rebaser object. If successful, set (*ppNew) to
** point to the new object and return SQLITE_OK. Otherwise, if an error
-** occurs, return an SQLite error code (e.g. SQLITE_NOMEM) and set (*ppNew)
-** to NULL.
+** occurs, return an SQLite error code (e.g. SQLITE_NOMEM) and set (*ppNew)
+** to NULL.
*/
SQLITE_API int sqlite3rebaser_create(sqlite3_rebaser **ppNew);
@@ -11248,9 +12935,9 @@ SQLITE_API int sqlite3rebaser_create(sqlite3_rebaser **ppNew);
** sqlite3changeset_apply_v2().
*/
SQLITE_API int sqlite3rebaser_configure(
- sqlite3_rebaser*,
+ sqlite3_rebaser*,
int nRebase, const void *pRebase
-);
+);
/*
** CAPI3REF: Rebase a changeset
@@ -11260,7 +12947,7 @@ SQLITE_API int sqlite3rebaser_configure(
** in size. This function allocates and populates a buffer with a copy
** of the changeset rebased according to the configuration of the
** rebaser object passed as the first argument. If successful, (*ppOut)
-** is set to point to the new buffer containing the rebased changeset and
+** is set to point to the new buffer containing the rebased changeset and
** (*pnOut) to its size in bytes and SQLITE_OK returned. It is the
** responsibility of the caller to eventually free the new buffer using
** sqlite3_free(). Otherwise, if an error occurs, (*ppOut) and (*pnOut)
@@ -11268,8 +12955,8 @@ SQLITE_API int sqlite3rebaser_configure(
*/
SQLITE_API int sqlite3rebaser_rebase(
sqlite3_rebaser*,
- int nIn, const void *pIn,
- int *pnOut, void **ppOut
+ int nIn, const void *pIn,
+ int *pnOut, void **ppOut
);
/*
@@ -11280,30 +12967,30 @@ SQLITE_API int sqlite3rebaser_rebase(
** should be one call to this function for each successful invocation
** of sqlite3rebaser_create().
*/
-SQLITE_API void sqlite3rebaser_delete(sqlite3_rebaser *p);
+SQLITE_API void sqlite3rebaser_delete(sqlite3_rebaser *p);
/*
** CAPI3REF: Streaming Versions of API functions.
**
-** The six streaming API xxx_strm() functions serve similar purposes to the
+** The six streaming API xxx_strm() functions serve similar purposes to the
** corresponding non-streaming API functions:
**
**
** | Streaming function | Non-streaming equivalent |
** |---|---|
-** | sqlite3changeset_apply_strm | [sqlite3changeset_apply] |
-** | sqlite3changeset_apply_strm_v2 | [sqlite3changeset_apply_v2] |
-** | sqlite3changeset_concat_strm | [sqlite3changeset_concat] |
-** | sqlite3changeset_invert_strm | [sqlite3changeset_invert] |
-** | sqlite3changeset_start_strm | [sqlite3changeset_start] |
-** | sqlite3session_changeset_strm | [sqlite3session_changeset] |
-** | sqlite3session_patchset_strm | [sqlite3session_patchset] |
+** | sqlite3changeset_apply_strm | [sqlite3changeset_apply] |
+** | sqlite3changeset_apply_strm_v2 | [sqlite3changeset_apply_v2] |
+** | sqlite3changeset_concat_strm | [sqlite3changeset_concat] |
+** | sqlite3changeset_invert_strm | [sqlite3changeset_invert] |
+** | sqlite3changeset_start_strm | [sqlite3changeset_start] |
+** | sqlite3session_changeset_strm | [sqlite3session_changeset] |
+** | sqlite3session_patchset_strm | [sqlite3session_patchset] |
**
**
** Non-streaming functions that accept changesets (or patchsets) as input
-** require that the entire changeset be stored in a single buffer in memory.
-** Similarly, those that return a changeset or patchset do so by returning
-** a pointer to a single large buffer allocated using sqlite3_malloc().
-** Normally this is convenient. However, if an application running in a
+** require that the entire changeset be stored in a single buffer in memory.
+** Similarly, those that return a changeset or patchset do so by returning
+** a pointer to a single large buffer allocated using sqlite3_malloc().
+** Normally this is convenient. However, if an application running in a
** low-memory environment is required to handle very large changesets, the
** large contiguous memory allocations required can become onerous.
**
@@ -11325,12 +13012,12 @@ SQLITE_API void sqlite3rebaser_delete(sqlite3_rebaser *p);
**
**
** Each time the xInput callback is invoked by the sessions module, the first
-** argument passed is a copy of the supplied pIn context pointer. The second
-** argument, pData, points to a buffer (*pnData) bytes in size. Assuming no
-** error occurs the xInput method should copy up to (*pnData) bytes of data
-** into the buffer and set (*pnData) to the actual number of bytes copied
-** before returning SQLITE_OK. If the input is completely exhausted, (*pnData)
-** should be set to zero to indicate this. Or, if an error occurs, an SQLite
+** argument passed is a copy of the supplied pIn context pointer. The second
+** argument, pData, points to a buffer (*pnData) bytes in size. Assuming no
+** error occurs the xInput method should copy up to (*pnData) bytes of data
+** into the buffer and set (*pnData) to the actual number of bytes copied
+** before returning SQLITE_OK. If the input is completely exhausted, (*pnData)
+** should be set to zero to indicate this. Or, if an error occurs, an SQLite
** error code should be returned. In all cases, if an xInput callback returns
** an error, all processing is abandoned and the streaming API function
** returns a copy of the error code to the caller.
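The xInput contract described above can be exercised without SQLite at all. The sketch below (type and function names are hypothetical; only `SQLITE_OK` mirrors a real header value) implements an xInput callback that streams from an in-memory buffer: it copies at most (*pnData) bytes per call, sets (*pnData) to the number of bytes actually copied, and reports end-of-input by setting it to zero:

```c
#include <assert.h>
#include <string.h>

#define SQLITE_OK 0   /* mirrors the value in sqlite3.h */

/* Hypothetical context object: an input buffer plus a read cursor. */
typedef struct MemIn {
  const unsigned char *aData;   /* input buffer */
  int nData;                    /* total size of buffer in bytes */
  int iNext;                    /* offset of next unread byte */
} MemIn;

/* xInput callback: copy up to (*pnData) bytes into pData, then set
** (*pnData) to the number of bytes actually copied. Zero means the
** input is exhausted. */
static int memInput(void *pIn, void *pData, int *pnData){
  MemIn *p = (MemIn*)pIn;
  int nAvail = p->nData - p->iNext;
  int nCopy = (nAvail < *pnData) ? nAvail : *pnData;
  memcpy(pData, &p->aData[p->iNext], (size_t)nCopy);
  p->iNext += nCopy;
  *pnData = nCopy;
  return SQLITE_OK;
}
```

A pointer to a MemIn and the memInput function would be passed as the pIn/xInput arguments of a streaming API such as sqlite3changeset_apply_strm(), which then calls memInput repeatedly until (*pnData) comes back zero.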
@@ -11338,7 +13025,7 @@ SQLITE_API void sqlite3rebaser_delete(sqlite3_rebaser *p);
** In the case of sqlite3changeset_start_strm(), the xInput callback may be
** invoked by the sessions module at any point during the lifetime of the
** iterator. If such an xInput callback returns an error, the iterator enters
-** an error state, whereby all subsequent calls to iterator functions
+** an error state, whereby all subsequent calls to iterator functions
** immediately fail with the same error code as returned by xInput.
**
** Similarly, streaming API functions that return changesets (or patchsets)
@@ -11368,7 +13055,7 @@ SQLITE_API void sqlite3rebaser_delete(sqlite3_rebaser *p);
** is immediately abandoned and the streaming API function returns a copy
** of the xOutput error code to the application.
**
-** The sessions module never invokes an xOutput callback with the third
+** The sessions module never invokes an xOutput callback with the third
** parameter set to a value less than or equal to zero. Other than this,
** no guarantees are made as to the size of the chunks of data returned.
*/
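The complementary xOutput side can be mocked the same way. In this sketch (names hypothetical; `SQLITE_OK`/`SQLITE_NOMEM` mirror header values) the callback appends each chunk to a fixed-capacity buffer and returns an error code when it cannot, which would cause the streaming API to abandon processing as described above:

```c
#include <assert.h>
#include <string.h>

#define SQLITE_OK    0   /* mirrors the values in sqlite3.h */
#define SQLITE_NOMEM 7

/* Hypothetical context object: a fixed-capacity accumulation buffer. */
typedef struct MemOut {
  unsigned char aBuf[256];
  int nUsed;
} MemOut;

/* xOutput callback: append the nData bytes at pData to the buffer.
** Per the documentation above, the sessions module always invokes
** this with nData greater than zero. */
static int memOutput(void *pOut, const void *pData, int nData){
  MemOut *p = (MemOut*)pOut;
  if( p->nUsed + nData > (int)sizeof(p->aBuf) ) return SQLITE_NOMEM;
  memcpy(&p->aBuf[p->nUsed], pData, (size_t)nData);
  p->nUsed += nData;
  return SQLITE_OK;
}
```

A real application would typically grow the buffer with sqlite3_realloc() or write the chunks to a file instead of failing at a fixed capacity.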
@@ -11404,6 +13091,23 @@ SQLITE_API int sqlite3changeset_apply_v2_strm(
void **ppRebase, int *pnRebase,
int flags
);
+SQLITE_API int sqlite3changeset_apply_v3_strm(
+ sqlite3 *db, /* Apply change to "main" db of this handle */
+ int (*xInput)(void *pIn, void *pData, int *pnData), /* Input function */
+ void *pIn, /* First arg for xInput */
+ int(*xFilter)(
+ void *pCtx, /* Copy of sixth arg to _apply() */
+ sqlite3_changeset_iter *p
+ ),
+ int(*xConflict)(
+ void *pCtx, /* Copy of sixth arg to _apply() */
+ int eConflict, /* DATA, MISSING, CONFLICT, CONSTRAINT */
+ sqlite3_changeset_iter *p /* Handle describing change and conflict */
+ ),
+ void *pCtx, /* First argument passed to xConflict */
+ void **ppRebase, int *pnRebase,
+ int flags
+);
SQLITE_API int sqlite3changeset_concat_strm(
int (*xInputA)(void *pIn, void *pData, int *pnData),
void *pInA,
@@ -11439,12 +13143,12 @@ SQLITE_API int sqlite3session_patchset_strm(
int (*xOutput)(void *pOut, const void *pData, int nData),
void *pOut
);
-SQLITE_API int sqlite3changegroup_add_strm(sqlite3_changegroup*,
+SQLITE_API int sqlite3changegroup_add_strm(sqlite3_changegroup*,
int (*xInput)(void *pIn, void *pData, int *pnData),
void *pIn
);
SQLITE_API int sqlite3changegroup_output_strm(sqlite3_changegroup*,
- int (*xOutput)(void *pOut, const void *pData, int nData),
+ int (*xOutput)(void *pOut, const void *pData, int nData),
void *pOut
);
SQLITE_API int sqlite3rebaser_rebase_strm(
@@ -11459,16 +13163,16 @@ SQLITE_API int sqlite3rebaser_rebase_strm(
** CAPI3REF: Configure global parameters
**
** The sqlite3session_config() interface is used to make global configuration
-** changes to the sessions module in order to tune it to the specific needs
+** changes to the sessions module in order to tune it to the specific needs
** of the application.
**
** The sqlite3session_config() interface is not threadsafe. If it is invoked
** while any other thread is inside any other sessions method then the
** results are undefined. Furthermore, if it is invoked after any sessions
-** related objects have been created, the results are also undefined.
+** related objects have been created, the results are also undefined.
**
** The first argument to the sqlite3session_config() function must be one
-** of the SQLITE_SESSION_CONFIG_XXX constants defined below. The
+** of the SQLITE_SESSION_CONFIG_XXX constants defined below. The
** interpretation of the (void*) value passed as the second parameter and
** the effect of calling this function depends on the value of the first
** parameter.
@@ -11518,7 +13222,7 @@ SQLITE_API int sqlite3session_config(int op, void *pArg);
**
******************************************************************************
**
-** Interfaces to extend FTS5. Using the interfaces defined in this file,
+** Interfaces to extend FTS5. Using the interfaces defined in this file,
** FTS5 may be extended with:
**
** * custom tokenizers, and
@@ -11562,19 +13266,19 @@ struct Fts5PhraseIter {
** EXTENSION API FUNCTIONS
**
** xUserData(pFts):
-** Return a copy of the context pointer the extension function was
-** registered with.
+** Return a copy of the pUserData pointer passed to the xCreateFunction()
+** API when the extension function was registered.
**
** xColumnTotalSize(pFts, iCol, pnToken):
** If parameter iCol is less than zero, set output variable *pnToken
** to the total number of tokens in the FTS5 table. Or, if iCol is
** non-negative but less than the number of columns in the table, return
-** the total number of tokens in column iCol, considering all rows in
+** the total number of tokens in column iCol, considering all rows in
** the FTS5 table.
**
** If parameter iCol is greater than or equal to the number of columns
** in the table, SQLITE_RANGE is returned. Or, if an error occurs (e.g.
-** an OOM condition or IO error), an appropriate SQLite error code is
+** an OOM condition or IO error), an appropriate SQLite error code is
** returned.
**
** xColumnCount(pFts):
@@ -11588,15 +13292,18 @@ struct Fts5PhraseIter {
**
** If parameter iCol is greater than or equal to the number of columns
** in the table, SQLITE_RANGE is returned. Or, if an error occurs (e.g.
-** an OOM condition or IO error), an appropriate SQLite error code is
+** an OOM condition or IO error), an appropriate SQLite error code is
** returned.
**
** This function may be quite inefficient if used with an FTS5 table
** created with the "columnsize=0" option.
**
** xColumnText:
-** This function attempts to retrieve the text of column iCol of the
-** current document. If successful, (*pz) is set to point to a buffer
+** If parameter iCol is less than zero, or greater than or equal to the
+** number of columns in the table, SQLITE_RANGE is returned.
+**
+** Otherwise, this function attempts to retrieve the text of column iCol of
+** the current document. If successful, (*pz) is set to point to a buffer
** containing the text in utf-8 encoding, (*pn) is set to the size in bytes
** (not characters) of the buffer and SQLITE_OK is returned. Otherwise,
** if an error occurs, an SQLite error code is returned and the final values
@@ -11606,8 +13313,10 @@ struct Fts5PhraseIter {
** Returns the number of phrases in the current query expression.
**
** xPhraseSize:
-** Returns the number of tokens in phrase iPhrase of the query. Phrases
-** are numbered starting from zero.
+** If parameter iPhrase is less than zero, or greater than or equal to the
+** number of phrases in the current query, as returned by xPhraseCount,
+** 0 is returned. Otherwise, this function returns the number of tokens in
+** phrase iPhrase of the query. Phrases are numbered starting from zero.
**
** xInstCount:
** Set *pnInst to the total number of occurrences of all phrases within
@@ -11615,23 +13324,24 @@ struct Fts5PhraseIter {
** an error code (i.e. SQLITE_NOMEM) if an error occurs.
**
** This API can be quite slow if used with an FTS5 table created with the
-** "detail=none" or "detail=column" option. If the FTS5 table is created
-** with either "detail=none" or "detail=column" and "content=" option
+** "detail=none" or "detail=column" option. If the FTS5 table is created
+** with either "detail=none" or "detail=column" and "content=" option
** (i.e. if it is a contentless table), then this API always returns 0.
**
** xInst:
** Query for the details of phrase match iIdx within the current row.
** Phrase matches are numbered starting from zero, so the iIdx argument
** should be greater than or equal to zero and smaller than the value
-** output by xInstCount().
+** output by xInstCount(). If iIdx is less than zero or greater than
+** or equal to the value returned by xInstCount(), SQLITE_RANGE is returned.
**
-** Usually, output parameter *piPhrase is set to the phrase number, *piCol
+** Otherwise, output parameter *piPhrase is set to the phrase number, *piCol
** to the column in which it occurs and *piOff the token offset of the
-** first token of the phrase. Returns SQLITE_OK if successful, or an error
-** code (i.e. SQLITE_NOMEM) if an error occurs.
+** first token of the phrase. SQLITE_OK is returned if successful, or an
+** error code (i.e. SQLITE_NOMEM) if an error occurs.
**
** This API can be quite slow if used with an FTS5 table created with the
-** "detail=none" or "detail=column" option.
+** "detail=none" or "detail=column" option.
**
** xRowid:
** Returns the rowid of the current row.
@@ -11647,13 +13357,17 @@ struct Fts5PhraseIter {
**
** with $p set to a phrase equivalent to the phrase iPhrase of the
** current query is executed. Any column filter that applies to
-** phrase iPhrase of the current query is included in $p. For each
-** row visited, the callback function passed as the fourth argument
-** is invoked. The context and API objects passed to the callback
+** phrase iPhrase of the current query is included in $p. For each
+** row visited, the callback function passed as the fourth argument
+** is invoked. The context and API objects passed to the callback
** function may be used to access the properties of each matched row.
-** Invoking Api.xUserData() returns a copy of the pointer passed as
+** Invoking Api.xUserData() returns a copy of the pointer passed as
** the third argument (pUserData) to xQueryPhrase().
**
+** If parameter iPhrase is less than zero, or greater than or equal to
+** the number of phrases in the query, as returned by xPhraseCount(),
+** this function returns SQLITE_RANGE.
+**
** If the callback function returns any value other than SQLITE_OK, the
** query is abandoned and the xQueryPhrase function returns immediately.
** If the returned value is SQLITE_DONE, xQueryPhrase returns SQLITE_OK.
@@ -11666,14 +13380,14 @@ struct Fts5PhraseIter {
**
** xSetAuxdata(pFts5, pAux, xDelete)
**
-** Save the pointer passed as the second argument as the extension function's
+** Save the pointer passed as the second argument as the extension function's
** "auxiliary data". The pointer may then be retrieved by the current or any
** future invocation of the same fts5 extension function made as part of
** the same MATCH query using the xGetAuxdata() API.
**
** Each extension function is allocated a single auxiliary data slot for
-** each FTS query (MATCH expression). If the extension function is invoked
-** more than once for a single FTS query, then all invocations share a
+** each FTS query (MATCH expression). If the extension function is invoked
+** more than once for a single FTS query, then all invocations share a
** single auxiliary data context.
**
** If there is already an auxiliary data pointer when this function is
@@ -11692,7 +13406,7 @@ struct Fts5PhraseIter {
**
** xGetAuxdata(pFts5, bClear)
**
-** Returns the current auxiliary data pointer for the fts5 extension
+** Returns the current auxiliary data pointer for the fts5 extension
** function. See the xSetAuxdata() method for details.
**
** If the bClear argument is non-zero, then the auxiliary data is cleared
@@ -11712,7 +13426,7 @@ struct Fts5PhraseIter {
** method, to iterate through all instances of a single query phrase within
** the current row. This is the same information as is accessible via the
** xInstCount/xInst APIs. While the xInstCount/xInst APIs are more convenient
-** to use, this API may be faster under some circumstances. To iterate
+** to use, this API may be faster under some circumstances. To iterate
** through instances of phrase iPhrase, use the following code:
**
** Fts5PhraseIter iter;
@@ -11730,11 +13444,15 @@ struct Fts5PhraseIter {
** xPhraseFirstColumn() and xPhraseNextColumn() as illustrated below).
**
** This API can be quite slow if used with an FTS5 table created with the
-** "detail=none" or "detail=column" option. If the FTS5 table is created
-** with either "detail=none" or "detail=column" and "content=" option
+** "detail=none" or "detail=column" option. If the FTS5 table is created
+** with either "detail=none" or "detail=column" and "content=" option
** (i.e. if it is a contentless table), then this API always iterates
** through an empty set (all calls to xPhraseFirst() set iCol to -1).
**
+** In all cases, matches are visited in (column ASC, offset ASC) order.
+** i.e. all those in column 0, sorted by offset, followed by those in
+** column 1, etc.
+**
** xPhraseNext()
** See xPhraseFirst above.
**
@@ -11755,22 +13473,93 @@ struct Fts5PhraseIter {
** }
**
** This API can be quite slow if used with an FTS5 table created with the
-** "detail=none" option. If the FTS5 table is created with either
-** "detail=none" "content=" option (i.e. if it is a contentless table),
-** then this API always iterates through an empty set (all calls to
+** "detail=none" option. If the FTS5 table is created with both the
+** "detail=none" and "content=" options (i.e. if it is a contentless table),
+** then this API always iterates through an empty set (all calls to
** xPhraseFirstColumn() set iCol to -1).
**
** The information accessed using this API and its companion
** xPhraseFirstColumn() may also be obtained using xPhraseFirst/xPhraseNext
** (or xInst/xInstCount). The chief advantage of this API is that it is
** significantly more efficient than those alternatives when used with
-** "detail=column" tables.
+** "detail=column" tables.
**
** xPhraseNextColumn()
** See xPhraseFirstColumn above.
+**
+** xQueryToken(pFts5, iPhrase, iToken, ppToken, pnToken)
+** This is used to access token iToken of phrase iPhrase of the current
+** query. Before returning, output parameter *ppToken is set to point
+** to a buffer containing the requested token, and *pnToken to the
+** size of this buffer in bytes.
+**
+** If iPhrase or iToken are less than zero, or if iPhrase is greater than
+** or equal to the number of phrases in the query as reported by
+** xPhraseCount(), or if iToken is equal to or greater than the number of
+** tokens in the phrase, SQLITE_RANGE is returned and *ppToken and *pnToken
+** are both zeroed.
+**
+** The output text is not a copy of the query text that specified the
+** token. It is the output of the tokenizer module. For tokendata=1
+** tables, this includes any embedded 0x00 and trailing data.
+**
+** xInstToken(pFts5, iIdx, iToken, ppToken, pnToken)
+** This is used to access token iToken of phrase hit iIdx within the
+** current row. If iIdx is less than zero or greater than or equal to the
+** value returned by xInstCount(), SQLITE_RANGE is returned. Otherwise,
+** output variable (*ppToken) is set to point to a buffer containing the
+** matching document token, and (*pnToken) to the size of that buffer in
+** bytes.
+**
+** The output text is not a copy of the document text that was tokenized.
+** It is the output of the tokenizer module. For tokendata=1 tables, this
+** includes any embedded 0x00 and trailing data.
+**
+** This API may be slow in some cases if the token identified by parameters
+** iIdx and iToken matched a prefix token in the query. In most cases, the
+** first call to this API for each prefix token in the query is forced
+** to scan the portion of the full-text index that matches the prefix
+** token to collect the extra data required by this API. If the prefix
+** token matches a large number of token instances in the document set,
+** this may be a performance problem.
+**
+** If the user knows in advance that a query may use this API for a
+** prefix token, FTS5 may be configured to collect all required data as part
+** of the initial querying of the full-text index, avoiding the second scan
+** entirely. This also causes prefix queries that do not use this API to
+** run more slowly and use more memory. FTS5 may be configured in this way
+** either on a per-table basis using the [FTS5 insttoken | 'insttoken']
+** option, or on a per-query basis using the
+** [fts5_insttoken | fts5_insttoken()] user function.
+**
+** This API can be quite slow if used with an FTS5 table created with the
+** "detail=none" or "detail=column" option.
+**
+** xColumnLocale(pFts5, iCol, pzLocale, pnLocale)
+** If parameter iCol is less than zero, or greater than or equal to the
+** number of columns in the table, SQLITE_RANGE is returned.
+**
+** Otherwise, this function attempts to retrieve the locale associated
+** with column iCol of the current row. Usually, there is no associated
+** locale, and output parameters (*pzLocale) and (*pnLocale) are set
+** to NULL and 0, respectively. However, if the fts5_locale() function
+** was used to associate a locale with the value when it was inserted
+** into the fts5 table, then (*pzLocale) is set to point to a nul-terminated
+** buffer containing the name of the locale in utf-8 encoding. (*pnLocale)
+** is set to the size in bytes of the buffer, not including the
+** nul-terminator.
+**
+** If successful, SQLITE_OK is returned. Or, if an error occurs, an
+** SQLite error code is returned. The final value of the output parameters
+** is undefined in this case.
+**
+** xTokenize_v2:
+** Tokenize text using the tokenizer belonging to the FTS5 table. This
+** API is the same as the xTokenize() API, except that it allows a tokenizer
+** locale to be specified.
*/
struct Fts5ExtensionApi {
- int iVersion; /* Currently always set to 3 */
+ int iVersion; /* Currently always set to 4 */
void *(*xUserData)(Fts5Context*);
@@ -11778,7 +13567,7 @@ struct Fts5ExtensionApi {
int (*xRowCount)(Fts5Context*, sqlite3_int64 *pnRow);
int (*xColumnTotalSize)(Fts5Context*, int iCol, sqlite3_int64 *pnToken);
- int (*xTokenize)(Fts5Context*,
+ int (*xTokenize)(Fts5Context*,
const char *pText, int nText, /* Text to tokenize */
void *pCtx, /* Context passed to xToken() */
int (*xToken)(void*, int, const char*, int, int, int) /* Callback */
@@ -11805,17 +13594,33 @@ struct Fts5ExtensionApi {
int (*xPhraseFirstColumn)(Fts5Context*, int iPhrase, Fts5PhraseIter*, int*);
void (*xPhraseNextColumn)(Fts5Context*, Fts5PhraseIter*, int *piCol);
+
+ /* Below this point are iVersion>=3 only */
+ int (*xQueryToken)(Fts5Context*,
+ int iPhrase, int iToken,
+ const char **ppToken, int *pnToken
+ );
+ int (*xInstToken)(Fts5Context*, int iIdx, int iToken, const char**, int*);
+
+ /* Below this point are iVersion>=4 only */
+ int (*xColumnLocale)(Fts5Context*, int iCol, const char **pz, int *pn);
+ int (*xTokenize_v2)(Fts5Context*,
+ const char *pText, int nText, /* Text to tokenize */
+ const char *pLocale, int nLocale, /* Locale to pass to tokenizer */
+ void *pCtx, /* Context passed to xToken() */
+ int (*xToken)(void*, int, const char*, int, int, int) /* Callback */
+ );
};
-/*
+/*
** CUSTOM AUXILIARY FUNCTIONS
*************************************************************************/
/*************************************************************************
** CUSTOM TOKENIZERS
**
-** Applications may also register custom tokenizer types. A tokenizer
-** is registered by providing fts5 with a populated instance of the
+** Applications may also register custom tokenizer types. A tokenizer
+** is registered by providing fts5 with a populated instance of the
** following structure. All structure methods must be defined, setting
** any member of the fts5_tokenizer struct to NULL leads to undefined
** behaviour. The structure methods are expected to function as follows:
@@ -11825,17 +13630,17 @@ struct Fts5ExtensionApi {
** A tokenizer instance is required to actually tokenize text.
**
** The first argument passed to this function is a copy of the (void*)
-** pointer provided by the application when the fts5_tokenizer object
-** was registered with FTS5 (the third argument to xCreateTokenizer()).
+** pointer provided by the application when the fts5_tokenizer_v2 object
+** was registered with FTS5 (the third argument to xCreateTokenizer()).
** The second and third arguments are an array of nul-terminated strings
** containing the tokenizer arguments, if any, specified following the
** tokenizer name as part of the CREATE VIRTUAL TABLE statement used
** to create the FTS5 table.
**
-** The final argument is an output variable. If successful, (*ppOut)
+** The final argument is an output variable. If successful, (*ppOut)
** should be set to point to the new tokenizer handle and SQLITE_OK
** returned. If an error occurs, some value other than SQLITE_OK should
-** be returned. In this case, fts5 assumes that the final value of *ppOut
+** be returned. In this case, fts5 assumes that the final value of *ppOut
** is undefined.
**
** xDelete:
@@ -11844,12 +13649,12 @@ struct Fts5ExtensionApi {
** be invoked exactly once for each successful call to xCreate().
**
** xTokenize:
-** This function is expected to tokenize the nText byte string indicated
+** This function is expected to tokenize the nText byte string indicated
** by argument pText. pText may or may not be nul-terminated. The first
** argument passed to this function is a pointer to an Fts5Tokenizer object
** returned by an earlier call to xCreate().
**
-** The second argument indicates the reason that FTS5 is requesting
+** The third argument indicates the reason that FTS5 is requesting
** tokenization of the supplied text. This is always one of the following
** four values:
**
@@ -11858,8 +13663,8 @@ struct Fts5ExtensionApi {
** determine the set of tokens to add to (or delete from) the
** FTS index.
**
-** - FTS5_TOKENIZE_QUERY - A MATCH query is being executed
-** against the FTS index. The tokenizer is being called to tokenize
+** - FTS5_TOKENIZE_QUERY - A MATCH query is being executed
+** against the FTS index. The tokenizer is being called to tokenize
** a bareword or quoted string specified as part of the query.
**
** - (FTS5_TOKENIZE_QUERY | FTS5_TOKENIZE_PREFIX) - Same as
@@ -11867,12 +13672,19 @@ struct Fts5ExtensionApi {
** followed by a "*" character, indicating that the last token
** returned by the tokenizer will be treated as a token prefix.
**
-** - FTS5_TOKENIZE_AUX - The tokenizer is being invoked to
+** - FTS5_TOKENIZE_AUX - The tokenizer is being invoked to
** satisfy an fts5_api.xTokenize() request made by an auxiliary
** function. Or an fts5_api.xColumnSize() request made by the same
-** on a columnsize=0 database.
+** on a columnsize=0 database.
**
**
+** The sixth and seventh arguments passed to xTokenize() - pLocale and
+** nLocale - are a pointer to a buffer containing the locale to use for
+** tokenization (e.g. "en_US") and its size in bytes, respectively. The
+** pLocale buffer is not nul-terminated. pLocale may be passed NULL (in
+** which case nLocale is always 0) to indicate that the tokenizer should
+** use its default locale.
+**
** For each token in the input string, the supplied callback xToken() must
** be invoked. The first argument to it should be a copy of the pointer
** passed as the second argument to xTokenize(). The third and fourth
@@ -11882,10 +13694,10 @@ struct Fts5ExtensionApi {
** which the token is derived within the input.
**
** The second argument passed to the xToken() callback ("tflags") should
-** normally be set to 0. The exception is if the tokenizer supports
+** normally be set to 0. The exception is if the tokenizer supports
** synonyms. In this case see the discussion below for details.
**
-** FTS5 assumes the xToken() callback is invoked for each token in the
+** FTS5 assumes the xToken() callback is invoked for each token in the
** order that they occur within the input text.
**
** If an xToken() callback returns any value other than SQLITE_OK, then
@@ -11896,10 +13708,34 @@ struct Fts5ExtensionApi {
** may abandon the tokenization and return any error code other than
** SQLITE_OK or SQLITE_DONE.
**
+** If the tokenizer is registered using an fts5_tokenizer_v2 object,
+** then the xTokenize() method has two additional arguments - pLocale
+** and nLocale. These specify the locale that the tokenizer should use
+** for the current request. If pLocale and nLocale are both 0, then the
+** tokenizer should use its default locale. Otherwise, pLocale points to
+** an nLocale byte buffer containing the name of the locale to use as utf-8
+** text. pLocale is not nul-terminated.
+**
+** FTS5_TOKENIZER
+**
+** There is also an fts5_tokenizer object. This is an older, deprecated,
+** version of fts5_tokenizer_v2. It is similar except that:
+**
+**
+** - There is no "iVersion" field, and
+** - The xTokenize() method does not take a locale argument.
+**
+**
+** Legacy fts5_tokenizer tokenizers must be registered using the
+** legacy xCreateTokenizer() function, instead of xCreateTokenizer_v2().
+**
+** Tokenizer implementations registered using either API may be retrieved
+** using both xFindTokenizer() and xFindTokenizer_v2().
+**
** SYNONYM SUPPORT
**
** Custom tokenizers may also support synonyms. Consider a case in which a
-** user wishes to query for a phrase such as "first place". Using the
+** user wishes to query for a phrase such as "first place". Using the
** built-in tokenizers, the FTS5 query 'first + place' will match instances
** of "first place" within the document set, but not alternative forms
** such as "1st place". In some applications, it would be better to match
@@ -11919,34 +13755,34 @@ struct Fts5ExtensionApi {
**
** - By querying the index for all synonyms of each query term
** separately. In this case, when tokenizing query text, the
-** tokenizer may provide multiple synonyms for a single term
-** within the document. FTS5 then queries the index for each
+** tokenizer may provide multiple synonyms for a single term
+** within the document. FTS5 then queries the index for each
** synonym individually. For example, faced with the query:
**
**
** ... MATCH 'first place'
**
** the tokenizer offers both "1st" and "first" as synonyms for the
-** first token in the MATCH query and FTS5 effectively runs a query
+** first token in the MATCH query and FTS5 effectively runs a query
** similar to:
**
**
** ... MATCH '(first OR 1st) place'
**
** except that, for the purposes of auxiliary functions, the query
-** still appears to contain just two phrases - "(first OR 1st)"
+** still appears to contain just two phrases - "(first OR 1st)"
** being treated as a single phrase.
**
** - By adding multiple synonyms for a single term to the FTS index.
** Using this method, when tokenizing document text, the tokenizer
-** provides multiple synonyms for each token. So that when a
+** provides multiple synonyms for each token. So that when a
** document such as "I won first place" is tokenized, entries are
** added to the FTS index for "i", "won", "first", "1st" and
** "place".
**
** This way, even if the tokenizer does not provide synonyms
** when tokenizing query text (it should not - to do so would be
-** inefficient), it doesn't matter if the user queries for
+** inefficient), it doesn't matter if the user queries for
** 'first + place' or '1st + place', as there are entries in the
** FTS index corresponding to both forms of the first token.
**
@@ -11967,11 +13803,11 @@ struct Fts5ExtensionApi {
**
** It is an error to specify the FTS5_TOKEN_COLOCATED flag the first time
** xToken() is called. Multiple synonyms may be specified for a single token
-** by making multiple calls to xToken(FTS5_TOKEN_COLOCATED) in sequence.
+** by making multiple calls to xToken(FTS5_TOKEN_COLOCATED) in sequence.
** There is no limit to the number of synonyms that may be provided for a
** single token.
**
-** In many cases, method (1) above is the best approach. It does not add
+** In many cases, method (1) above is the best approach. It does not add
** extra data to the FTS index or require FTS5 to query for multiple terms,
** so it is efficient in terms of disk space and query speed. However, it
** does not support prefix queries very well. If, as suggested above, the
@@ -11983,35 +13819,62 @@ struct Fts5ExtensionApi {
** will not match documents that contain the token "1st" (as the tokenizer
** will probably not map "1s" to any prefix of "first").
**
-** For full prefix support, method (3) may be preferred. In this case,
+** For full prefix support, method (3) may be preferred. In this case,
** because the index contains entries for both "first" and "1st", prefix
** queries such as 'fi*' or '1s*' will match correctly. However, because
** extra entries are added to the FTS index, this method uses more space
** within the database.
**
** Method (2) offers a midpoint between (1) and (3). Using this method,
-** a query such as '1s*' will match documents that contain the literal
+** a query such as '1s*' will match documents that contain the literal
** token "1st", but not "first" (assuming the tokenizer is not able to
** provide synonyms for prefixes). However, a non-prefix query like '1st'
** will match against "1st" and "first". This method does not require
-** extra disk space, as no extra entries are added to the FTS index.
+** extra disk space, as no extra entries are added to the FTS index.
** On the other hand, it may require more CPU cycles to run MATCH queries,
** as separate queries of the FTS index are required for each synonym.
**
** When using methods (2) or (3), it is important that the tokenizer only
-** provide synonyms when tokenizing document text (method (2)) or query
-** text (method (3)), not both. Doing so will not cause any errors, but is
+** provide synonyms when tokenizing document text (method (3)) or query
+** text (method (2)), not both. Doing so will not cause any errors, but is
** inefficient.
*/
typedef struct Fts5Tokenizer Fts5Tokenizer;
+typedef struct fts5_tokenizer_v2 fts5_tokenizer_v2;
+struct fts5_tokenizer_v2 {
+ int iVersion; /* Currently always 2 */
+
+ int (*xCreate)(void*, const char **azArg, int nArg, Fts5Tokenizer **ppOut);
+ void (*xDelete)(Fts5Tokenizer*);
+ int (*xTokenize)(Fts5Tokenizer*,
+ void *pCtx,
+ int flags, /* Mask of FTS5_TOKENIZE_* flags */
+ const char *pText, int nText,
+ const char *pLocale, int nLocale,
+ int (*xToken)(
+ void *pCtx, /* Copy of 2nd argument to xTokenize() */
+ int tflags, /* Mask of FTS5_TOKEN_* flags */
+ const char *pToken, /* Pointer to buffer containing token */
+ int nToken, /* Size of token in bytes */
+ int iStart, /* Byte offset of token within input text */
+ int iEnd /* Byte offset of end of token within input text */
+ )
+ );
+};
+
+/*
+** New code should use the fts5_tokenizer_v2 type to define tokenizer
+** implementations. The following type is included for legacy applications
+** that still use it.
+*/
typedef struct fts5_tokenizer fts5_tokenizer;
struct fts5_tokenizer {
int (*xCreate)(void*, const char **azArg, int nArg, Fts5Tokenizer **ppOut);
void (*xDelete)(Fts5Tokenizer*);
- int (*xTokenize)(Fts5Tokenizer*,
+ int (*xTokenize)(Fts5Tokenizer*,
void *pCtx,
int flags, /* Mask of FTS5_TOKENIZE_* flags */
- const char *pText, int nText,
+ const char *pText, int nText,
int (*xToken)(
void *pCtx, /* Copy of 2nd argument to xTokenize() */
int tflags, /* Mask of FTS5_TOKEN_* flags */
@@ -12023,6 +13886,7 @@ struct fts5_tokenizer {
);
};
+
/* Flags that may be passed as the third argument to xTokenize() */
#define FTS5_TOKENIZE_QUERY 0x0001
#define FTS5_TOKENIZE_PREFIX 0x0002
@@ -12042,13 +13906,13 @@ struct fts5_tokenizer {
*/
typedef struct fts5_api fts5_api;
struct fts5_api {
- int iVersion; /* Currently always set to 2 */
+ int iVersion; /* Currently always set to 3 */
/* Create a new tokenizer */
int (*xCreateTokenizer)(
fts5_api *pApi,
const char *zName,
- void *pContext,
+ void *pUserData,
fts5_tokenizer *pTokenizer,
void (*xDestroy)(void*)
);
@@ -12057,7 +13921,7 @@ struct fts5_api {
int (*xFindTokenizer)(
fts5_api *pApi,
const char *zName,
- void **ppContext,
+ void **ppUserData,
fts5_tokenizer *pTokenizer
);
@@ -12065,10 +13929,29 @@ struct fts5_api {
int (*xCreateFunction)(
fts5_api *pApi,
const char *zName,
- void *pContext,
+ void *pUserData,
fts5_extension_function xFunction,
void (*xDestroy)(void*)
);
+
+ /* APIs below this point are only available if iVersion>=3 */
+
+ /* Create a new tokenizer */
+ int (*xCreateTokenizer_v2)(
+ fts5_api *pApi,
+ const char *zName,
+ void *pUserData,
+ fts5_tokenizer_v2 *pTokenizer,
+ void (*xDestroy)(void*)
+ );
+
+ /* Find an existing tokenizer */
+ int (*xFindTokenizer_v2)(
+ fts5_api *pApi,
+ const char *zName,
+ void **ppUserData,
+ fts5_tokenizer_v2 **ppTokenizer
+ );
};
/*
@@ -12082,3 +13965,4 @@ struct fts5_api {
#endif /* _FTS5_H */
/******** End of fts5.h *********/
+#endif /* SQLITE3_H */
diff --git a/sqllin-dsl-test/build.gradle.kts b/sqllin-dsl-test/build.gradle.kts
index b8a42bc..78bab53 100644
--- a/sqllin-dsl-test/build.gradle.kts
+++ b/sqllin-dsl-test/build.gradle.kts
@@ -83,6 +83,21 @@ kotlin {
}
}
+gradle.taskGraph.whenReady {
+ if (!project.hasProperty("onCICD"))
+ return@whenReady
+ tasks.forEach {
+ when {
+ it.name.contains("linux", true) -> it.enabled = HostManager.hostIsLinux
+ it.name.contains("mingw", true) -> it.enabled = HostManager.hostIsMingw
+ it.name.contains("ios", true)
+ || it.name.contains("macos", true)
+ || it.name.contains("watchos", true)
+ || it.name.contains("tvos", true) -> it.enabled = HostManager.hostIsMac
+ }
+ }
+}
+
android {
namespace = "com.ctrip.sqllin.dsl.test"
compileSdk = libs.versions.android.sdk.compile.get().toInt()
diff --git a/sqllin-dsl-test/src/androidInstrumentedTest/kotlin/com/ctrip/sqllin/dsl/test/AndroidTest.kt b/sqllin-dsl-test/src/androidInstrumentedTest/kotlin/com/ctrip/sqllin/dsl/test/AndroidTest.kt
index 220af63..edf9ab0 100644
--- a/sqllin-dsl-test/src/androidInstrumentedTest/kotlin/com/ctrip/sqllin/dsl/test/AndroidTest.kt
+++ b/sqllin-dsl-test/src/androidInstrumentedTest/kotlin/com/ctrip/sqllin/dsl/test/AndroidTest.kt
@@ -109,6 +109,48 @@ class AndroidTest {
@Test
fun testNotNullConstraint() = commonTest.testNotNullConstraint()
+ @Test
+ fun testStringAggregateFunctions() = commonTest.testStringAggregateFunctions()
+
+ @Test
+ fun testIndexOperations() = commonTest.testIndexOperations()
+
+ @Test
+ fun testBlobLengthFunction() = commonTest.testBlobLengthFunction()
+
+ @Test
+ fun testPragmaForeignKeys() = commonTest.testPragmaForeignKeys()
+
+ @Test
+ fun testForeignKeyCascadeDelete() = commonTest.testForeignKeyCascadeDelete()
+
+ @Test
+ fun testForeignKeySetNullDelete() = commonTest.testForeignKeySetNullDelete()
+
+ @Test
+ fun testForeignKeyRestrictDelete() = commonTest.testForeignKeyRestrictDelete()
+
+ @Test
+ fun testCompositeForeignKey() = commonTest.testCompositeForeignKey()
+
+ @Test
+ fun testMultipleForeignKeys() = commonTest.testMultipleForeignKeys()
+
+ @Test
+ fun testForeignKeyCreateSQL() = commonTest.testForeignKeyCreateSQL()
+
+ @Test
+ fun testForeignKeyWithoutPragma() = commonTest.testForeignKeyWithoutPragma()
+
+ @Test
+ fun testDefaultValuesCreateSQL() = commonTest.testDefaultValuesCreateSQL()
+
+ @Test
+ fun testDefaultValuesInsert() = commonTest.testDefaultValuesInsert()
+
+ @Test
+ fun testDefaultValuesWithForeignKey() = commonTest.testDefaultValuesWithForeignKey()
+
@Before
fun setUp() {
val context = InstrumentationRegistry.getInstrumentation().targetContext
diff --git a/sqllin-dsl-test/src/commonMain/kotlin/com/ctrip/sqllin/dsl/test/Entities.kt b/sqllin-dsl-test/src/commonMain/kotlin/com/ctrip/sqllin/dsl/test/Entities.kt
index bdef96e..da6cd9a 100644
--- a/sqllin-dsl-test/src/commonMain/kotlin/com/ctrip/sqllin/dsl/test/Entities.kt
+++ b/sqllin-dsl-test/src/commonMain/kotlin/com/ctrip/sqllin/dsl/test/Entities.kt
@@ -260,4 +260,187 @@ data class CombinedConstraintsTest(
@Unique @CollateNoCase val code: String,
@Unique val serial: String,
val value: Int,
+)
+
+/**
+ * Foreign Key Test Entities
+ */
+
+/**
+ * Parent table for testing @References annotation
+ */
+@DBRow("fk_user")
+@Serializable
+data class FKUser(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @Unique val email: String,
+ val name: String,
+)
+
+/**
+ * Child table with CASCADE delete using @References
+ */
+@DBRow("fk_order")
+@Serializable
+data class FKOrder(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @com.ctrip.sqllin.dsl.annotation.References(
+ tableName = "fk_user",
+ foreignKeys = ["id"],
+ trigger = com.ctrip.sqllin.dsl.annotation.Trigger.ON_DELETE_CASCADE
+ )
+ val userId: Long,
+ val amount: Double,
+ val orderDate: String,
+)
+
+/**
+ * Child table with SET_NULL delete using @References
+ */
+@DBRow("fk_post")
+@Serializable
+data class FKPost(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @com.ctrip.sqllin.dsl.annotation.References(
+ tableName = "fk_user",
+ foreignKeys = ["id"],
+ trigger = com.ctrip.sqllin.dsl.annotation.Trigger.ON_DELETE_SET_NULL
+ )
+ val authorId: Long?,
+ val title: String,
+ val content: String,
+)
+
+/**
+ * Child table with RESTRICT delete using @References
+ */
+@DBRow("fk_profile")
+@Serializable
+data class FKProfile(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @com.ctrip.sqllin.dsl.annotation.References(
+ tableName = "fk_user",
+ foreignKeys = ["id"],
+ trigger = com.ctrip.sqllin.dsl.annotation.Trigger.ON_DELETE_RESTRICT
+ )
+ val userId: Long,
+ val bio: String,
+ val website: String?,
+)
+
+/**
+ * Parent table with composite primary key for testing composite foreign keys
+ */
+@DBRow("fk_product")
+@Serializable
+data class FKProduct(
+ @CompositePrimaryKey val categoryId: Int,
+ @CompositePrimaryKey val productCode: String,
+ val name: String,
+ val price: Double,
+)
+
+/**
+ * Child table with composite foreign key using @ForeignKeyGroup and @ForeignKey annotations
+ */
+@DBRow("fk_order_item")
+@Serializable
+@com.ctrip.sqllin.dsl.annotation.ForeignKeyGroup(
+ group = 0,
+ tableName = "fk_product",
+ trigger = com.ctrip.sqllin.dsl.annotation.Trigger.ON_DELETE_CASCADE
+)
+data class FKOrderItem(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @com.ctrip.sqllin.dsl.annotation.ForeignKey(group = 0, reference = "categoryId")
+ val productCategory: Int,
+ @com.ctrip.sqllin.dsl.annotation.ForeignKey(group = 0, reference = "productCode")
+ val productCode: String,
+ val quantity: Int,
+ val subtotal: Double,
+)
+
+/**
+ * Table with multiple foreign keys to different tables
+ */
+@DBRow("fk_comment")
+@Serializable
+data class FKComment(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @com.ctrip.sqllin.dsl.annotation.References(
+ tableName = "fk_user",
+ foreignKeys = ["id"],
+ trigger = com.ctrip.sqllin.dsl.annotation.Trigger.ON_DELETE_CASCADE
+ )
+ val authorId: Long,
+ @com.ctrip.sqllin.dsl.annotation.References(
+ tableName = "fk_post",
+ foreignKeys = ["id"],
+ trigger = com.ctrip.sqllin.dsl.annotation.Trigger.ON_DELETE_CASCADE
+ )
+ val postId: Long,
+ val content: String,
+ val createdAt: String,
+)
+
+/**
+ * Default Values Test Entities
+ */
+
+/**
+ * Test entity for @Default annotation with basic types
+ * Tests default values for String, Int, Boolean, and SQLite functions
+ */
+@DBRow("default_values_test")
+@Serializable
+data class DefaultValuesTest(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ val name: String,
+ @com.ctrip.sqllin.dsl.annotation.Default("'active'") val status: String,
+ @com.ctrip.sqllin.dsl.annotation.Default("0") val loginCount: Int,
+ @com.ctrip.sqllin.dsl.annotation.Default("1") val isEnabled: Boolean,
+ @com.ctrip.sqllin.dsl.annotation.Default("CURRENT_TIMESTAMP") val createdAt: String,
+)
+
+/**
+ * Test entity for @Default annotation with nullable types
+ * Tests default values on nullable columns
+ */
+@DBRow("default_nullable_test")
+@Serializable
+data class DefaultNullableTest(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ val name: String,
+ @com.ctrip.sqllin.dsl.annotation.Default("'In Stock'") val availability: String?,
+ @com.ctrip.sqllin.dsl.annotation.Default("100") val quantity: Int?,
+ @com.ctrip.sqllin.dsl.annotation.Default("0.0") val discount: Double?,
+)
+
+/**
+ * Parent table for testing @Default with foreign key SET_DEFAULT trigger
+ */
+@DBRow("default_fk_parent")
+@Serializable
+data class DefaultFKParent(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ val name: String,
+)
+
+/**
+ * Child table with @Default and foreign key SET_DEFAULT trigger
+ * Tests that default values work with ON_DELETE_SET_DEFAULT
+ */
+@DBRow("default_fk_child")
+@Serializable
+@com.ctrip.sqllin.dsl.annotation.ForeignKeyGroup(
+ group = 0,
+ tableName = "default_fk_parent",
+ trigger = com.ctrip.sqllin.dsl.annotation.Trigger.ON_DELETE_SET_DEFAULT
+)
+data class DefaultFKChild(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @com.ctrip.sqllin.dsl.annotation.ForeignKey(group = 0, reference = "id")
+ @com.ctrip.sqllin.dsl.annotation.Default("0")
+ val parentId: Long,
+ val description: String,
)
\ No newline at end of file
diff --git a/sqllin-dsl-test/src/commonTest/kotlin/com/ctrip/sqllin/dsl/test/CommonBasicTest.kt b/sqllin-dsl-test/src/commonTest/kotlin/com/ctrip/sqllin/dsl/test/CommonBasicTest.kt
index 0cec90b..56b8a41 100644
--- a/sqllin-dsl-test/src/commonTest/kotlin/com/ctrip/sqllin/dsl/test/CommonBasicTest.kt
+++ b/sqllin-dsl-test/src/commonTest/kotlin/com/ctrip/sqllin/dsl/test/CommonBasicTest.kt
@@ -307,6 +307,15 @@ class CommonBasicTest(private val path: DatabasePath) {
var selectStatement6: SelectStatement? = null
var selectStatement7: SelectStatement? = null
var selectStatement8: SelectStatement? = null
+ var selectStatement9: SelectStatement? = null
+ // var selectStatement10: SelectStatement? = null
+ var selectStatement11: SelectStatement? = null
+ var selectStatement12: SelectStatement? = null
+ var selectStatement13: SelectStatement? = null
+ var selectStatement14: SelectStatement? = null
+ var selectStatement15: SelectStatement? = null
+ var selectStatement16: SelectStatement? = null
+ var selectStatement17: SelectStatement? = null
database {
BookTable { table ->
table INSERT listOf(book0, book1, book2, book3, book4)
@@ -319,6 +328,19 @@ class CommonBasicTest(private val path: DatabasePath) {
selectStatement6 = table SELECT GROUP_BY (author) HAVING (min(price) LT 17)
selectStatement7 = table SELECT GROUP_BY (author) HAVING (avg(pages) LT 400)
selectStatement8 = table SELECT GROUP_BY (author) HAVING (sum(pages) LTE 970)
+ // New functions: round, sign
+ selectStatement9 = table SELECT WHERE(round(price, 0) EQ 17.0)
+ // selectStatement10 = table SELECT WHERE(sign(pages) EQ 1)
+ // New string functions: substr, trim, ltrim, rtrim
+ selectStatement11 = table SELECT WHERE(substr(name, 1, 6) EQ "Kotlin")
+ selectStatement12 = table SELECT WHERE(trim(name) EQ "Kotlin Cookbook")
+ selectStatement13 = table SELECT WHERE(ltrim(name) EQ "Kotlin Cookbook")
+ selectStatement14 = table SELECT WHERE(rtrim(name) EQ "Kotlin Cookbook")
+ // New string functions: replace, instr
+ selectStatement15 = table SELECT WHERE(instr(name, "Kotlin") GT 0)
+ selectStatement16 = table SELECT WHERE(replace(author, "Brown", "Smith") EQ "Dan Smith")
+ // Test random function (just check it returns results)
+ selectStatement17 = table SELECT ORDER_BY(random()) LIMIT 3
}
}
assertEquals(book1, selectStatement0?.getResults()?.first())
@@ -330,6 +352,16 @@ class CommonBasicTest(private val path: DatabasePath) {
assertEquals(book0.author, selectStatement6?.getResults()?.first()?.author)
assertEquals(book4.author, selectStatement7?.getResults()?.first()?.author)
assertEquals(book0.author, selectStatement8?.getResults()?.first()?.author)
+ // Verify new functions
+ assertEquals(book0, selectStatement9?.getResults()?.first())
+ // assertEquals(5, selectStatement10?.getResults()?.size) // All books have positive pages
+ assertEquals(2, selectStatement11?.getResults()?.size) // Kotlin Cookbook and Kotlin Guide Pratique
+ assertEquals(book1, selectStatement12?.getResults()?.first())
+ assertEquals(book1, selectStatement13?.getResults()?.first())
+ assertEquals(book1, selectStatement14?.getResults()?.first())
+ assertEquals(2, selectStatement15?.getResults()?.size) // Books with "Kotlin" in name
+ assertEquals(book0, selectStatement16?.getResults()?.first())
+ assertEquals(3, selectStatement17?.getResults()?.size) // Random ordering, but should return 3 results
}
fun testJoinClause() = Database(getDefaultDBConfig(), true).databaseAutoClose { database ->
@@ -1513,6 +1545,176 @@ class CommonBasicTest(private val path: DatabasePath) {
assertEquals(false, remainingUsers.any { it.status == UserStatus.BANNED })
}
+ /**
+ * Test for new SQL string aggregate and formatting functions
+ * Tests group_concat and printf functions
+ */
+ fun testStringAggregateFunctions() = Database(getNewAPIDBConfig(), true).databaseAutoClose { database ->
+ // Clear and insert test data
+ val book0 = Book(name = "Book A", author = "Author X", pages = 100, price = 10.50)
+ val book1 = Book(name = "Book B", author = "Author X", pages = 200, price = 20.99)
+ val book2 = Book(name = "Book C", author = "Author Y", pages = 300, price = 30.00)
+
+ database {
+ BookTable { table ->
+ table DELETE X
+ table INSERT listOf(book0, book1, book2)
+ }
+ }
+
+ // Test group_concat - concatenate book names by author
+ var groupConcatStatement: SelectStatement? = null
+ database {
+ BookTable { table ->
+ groupConcatStatement = table SELECT GROUP_BY(author) HAVING (group_concat(name, ",") LIKE "%Book A%")
+ }
+ }
+ assertEquals(1, groupConcatStatement?.getResults()?.size)
+ assertEquals("Author X", groupConcatStatement?.getResults()?.first()?.author)
+
+ // Test printf - format price as currency
+ var printfStatement: SelectStatement? = null
+ database {
+ BookTable { table ->
+ printfStatement = table SELECT WHERE (printf("%.2f", price) LIKE "%.2f")
+ }
+ }
+ // Printf formats the value, we're just checking it can be used in queries
+ assertNotEquals(null, printfStatement?.getResults())
+ }
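For orientation, the `group_concat` HAVING clause exercised above corresponds to SQL of roughly this shape (a sketch; the actual table name and identifier quoting SQLlin generates are assumptions):

```sql
SELECT * FROM Book
GROUP BY author
HAVING group_concat(name, ',') LIKE '%Book A%';
```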
+
+ /**
+ * Test for CREATE_INDEX and CREATE_UNIQUE_INDEX operations
+ * Verifies index creation functionality
+ */
+ @OptIn(ExperimentalDSLDatabaseAPI::class)
+ fun testIndexOperations() = Database(getNewAPIDBConfig(), true).databaseAutoClose { database ->
+ // Test 1: CREATE_INDEX on single column
+ database {
+ BookTable.CREATE_INDEX("idx_book_name", BookTable.name)
+ }
+
+ // Verify index was created by inserting data and querying
+ val book1 = Book(name = "Test Book 1", author = "Author 1", pages = 100, price = 10.99)
+ val book2 = Book(name = "Test Book 2", author = "Author 2", pages = 200, price = 20.99)
+ database {
+ BookTable { table ->
+ table INSERT listOf(book1, book2)
+ }
+ }
+
+ lateinit var selectStatement: SelectStatement
+ database {
+ selectStatement = BookTable SELECT WHERE (BookTable.name EQ "Test Book 1")
+ }
+ assertEquals(1, selectStatement.getResults().size)
+ assertEquals(book1.name, selectStatement.getResults().first().name)
+
+ // Test 2: CREATE_INDEX on multiple columns
+ database {
+ PersonWithIdTable.CREATE_INDEX("idx_person_name_age", PersonWithIdTable.name, PersonWithIdTable.age)
+ }
+
+ val person1 = PersonWithId(id = null, name = "Alice", age = 25)
+ val person2 = PersonWithId(id = null, name = "Bob", age = 30)
+ database {
+ PersonWithIdTable { table ->
+ table INSERT listOf(person1, person2)
+ }
+ }
+
+ lateinit var personStatement: SelectStatement
+ database {
+ personStatement = PersonWithIdTable SELECT WHERE (PersonWithIdTable.name EQ "Alice" AND (PersonWithIdTable.age EQ 25))
+ }
+ assertEquals(1, personStatement.getResults().size)
+ assertEquals("Alice", personStatement.getResults().first().name)
+
+ // Test 3: CREATE_UNIQUE_INDEX - should enforce uniqueness
+ database {
+ ProductTable.CREATE_UNIQUE_INDEX("idx_unique_product_name", ProductTable.name)
+ }
+
+ val product1 = Product(sku = null, name = "Widget", price = 19.99)
+ database {
+ ProductTable { table ->
+ table INSERT product1
+ }
+ }
+
+ // Try to insert duplicate - should fail
+ val product2 = Product(sku = null, name = "Widget", price = 29.99)
+ var duplicateFailed = false
+ try {
+ database {
+ ProductTable { table ->
+ table INSERT product2
+ }
+ }
+ } catch (e: Exception) {
+ duplicateFailed = true
+ }
+ assertEquals(true, duplicateFailed, "Duplicate value should violate unique index")
+
+ // Test 4: Verify empty columns parameter throws exception
+ var emptyColumnsFailed = false
+ try {
+ database {
+ BookTable.CREATE_INDEX("idx_empty")
+ }
+ } catch (e: IllegalArgumentException) {
+ emptyColumnsFailed = true
+ }
+ assertEquals(true, emptyColumnsFailed, "CREATE_INDEX with no columns should throw IllegalArgumentException")
+
+ // Test 5: CREATE_UNIQUE_INDEX with empty columns should also fail
+ var emptyUniqueColumnsFailed = false
+ try {
+ database {
+ BookTable.CREATE_UNIQUE_INDEX("idx_unique_empty")
+ }
+ } catch (e: IllegalArgumentException) {
+ emptyUniqueColumnsFailed = true
+ }
+ assertEquals(true, emptyUniqueColumnsFailed, "CREATE_UNIQUE_INDEX with no columns should throw IllegalArgumentException")
+ }
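The index DSL calls tested above map onto plain SQLite DDL along these lines (a sketch; the exact table names SQLlin emits are assumptions based on the entity names):

```sql
CREATE INDEX idx_book_name ON Book(name);
CREATE INDEX idx_person_name_age ON PersonWithId(name, age);
CREATE UNIQUE INDEX idx_unique_product_name ON Product(name);
```

SQLite rejects a second `Widget` row under the unique index with an `SQLITE_CONSTRAINT_UNIQUE` error, which is what the duplicate-insert check relies on.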
+
+ /**
+ * Test for length function with BLOB type
+ * Verifies length() works with ClauseBlob parameter
+ */
+ fun testBlobLengthFunction() = Database(getNewAPIDBConfig(), true).databaseAutoClose { database ->
+ val file1 = FileData(id = null, fileName = "small.bin", content = byteArrayOf(0x01, 0x02, 0x03), metadata = "Small file")
+ val file2 = FileData(id = null, fileName = "large.bin", content = ByteArray(100) { it.toByte() }, metadata = "Large file")
+
+ database {
+ FileDataTable { table ->
+ table DELETE X
+ table INSERT listOf(file1, file2)
+ }
+ }
+
+ // Test length function with BLOB
+ var lengthStatement: SelectStatement? = null
+ database {
+ FileDataTable { table ->
+ lengthStatement = table SELECT WHERE (length(content) EQ 3)
+ }
+ }
+ assertEquals(1, lengthStatement?.getResults()?.size)
+ assertEquals("small.bin", lengthStatement?.getResults()?.first()?.fileName)
+
+ // Test length with GT operator
+ var lengthGTStatement: SelectStatement? = null
+ database {
+ FileDataTable { table ->
+ lengthGTStatement = table SELECT WHERE (length(content) GT 10)
+ }
+ }
+ assertEquals(1, lengthGTStatement?.getResults()?.size)
+ assertEquals("large.bin", lengthGTStatement?.getResults()?.first()?.fileName)
+ }
+
/**
* Test for compile-time CREATE TABLE generation
* Verifies that createSQL property contains the correct SQL statement
@@ -1948,6 +2150,628 @@ class CommonBasicTest(private val path: DatabasePath) {
}
}
+ /**
+ * Test PRAGMA_FOREIGN_KEYS function
+ * Verifies foreign key enforcement can be enabled/disabled
+ */
+ @OptIn(ExperimentalDSLDatabaseAPI::class)
+ fun testPragmaForeignKeys() {
+ Database(getForeignKeyDBConfig(), true).databaseAutoClose { database ->
+ // Test 1: Enable foreign keys
+ database {
+ PRAGMA_FOREIGN_KEYS(true)
+ }
+
+ // Test 2: Insert parent record
+ val user = FKUser(id = null, email = "test@example.com", name = "Test User")
+ database {
+ FKUserTable { table ->
+ table INSERT user
+ }
+ }
+
+ // Test 3: Insert child record with valid foreign key - should succeed
+ val order = FKOrder(id = null, userId = 1L, amount = 99.99, orderDate = "2025-01-15")
+ database {
+ FKOrderTable { table ->
+ table INSERT order
+ }
+ }
+
+ lateinit var selectStatement: SelectStatement
+ database {
+ selectStatement = FKOrderTable SELECT X
+ }
+ assertEquals(1, selectStatement.getResults().size)
+
+ // Test 4: Try to insert child with invalid foreign key - should fail
+ val invalidOrder = FKOrder(id = null, userId = 999L, amount = 50.0, orderDate = "2025-01-15")
+ var foreignKeyViolated = false
+ try {
+ database {
+ FKOrderTable { table ->
+ table INSERT invalidOrder
+ }
+ }
+ } catch (e: Exception) {
+ foreignKeyViolated = true
+ }
+ assertEquals(true, foreignKeyViolated, "Insert with invalid foreign key should fail when enforcement is enabled")
+ }
+ }
+
+ /**
+ * Test CASCADE delete behavior with @References
+ * Verifies that child rows are automatically deleted when parent is deleted
+ */
+ @OptIn(ExperimentalDSLDatabaseAPI::class)
+ fun testForeignKeyCascadeDelete() {
+ Database(getForeignKeyDBConfig(), true).databaseAutoClose { database ->
+ // Enable foreign keys
+ database {
+ PRAGMA_FOREIGN_KEYS(true)
+ }
+
+ // Insert parent user
+ val user1 = FKUser(id = null, email = "alice@example.com", name = "Alice")
+ val user2 = FKUser(id = null, email = "bob@example.com", name = "Bob")
+ database {
+ FKUserTable { table ->
+ table INSERT listOf(user1, user2)
+ }
+ }
+
+ // Insert orders for both users
+ val order1 = FKOrder(id = null, userId = 1L, amount = 99.99, orderDate = "2025-01-15")
+ val order2 = FKOrder(id = null, userId = 1L, amount = 49.99, orderDate = "2025-01-16")
+ val order3 = FKOrder(id = null, userId = 2L, amount = 29.99, orderDate = "2025-01-17")
+ database {
+ FKOrderTable { table ->
+ table INSERT listOf(order1, order2, order3)
+ }
+ }
+
+ // Verify orders exist
+ lateinit var selectOrders: SelectStatement
+ database {
+ selectOrders = FKOrderTable SELECT X
+ }
+ assertEquals(3, selectOrders.getResults().size)
+
+ // Delete user 1 - should CASCADE delete their orders
+ database {
+ FKUserTable { table ->
+ table DELETE WHERE (table.id EQ 1L)
+ }
+ }
+
+ // Verify user 1's orders are deleted
+ database {
+ selectOrders = FKOrderTable SELECT X
+ }
+ val remainingOrders = selectOrders.getResults()
+ assertEquals(1, remainingOrders.size)
+ assertEquals(2L, remainingOrders[0].userId)
+
+ // Verify user 2 still exists
+ lateinit var selectUsers: SelectStatement
+ database {
+ selectUsers = FKUserTable SELECT X
+ }
+ assertEquals(1, selectUsers.getResults().size)
+ assertEquals("Bob", selectUsers.getResults()[0].name)
+ }
+ }
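The cascade behavior verified here depends on a child-table schema of roughly this form (a sketch; the column types are assumptions, and only the REFERENCES/ON DELETE fragment is asserted elsewhere in this PR):

```sql
CREATE TABLE fk_order (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    userId BIGINT NOT NULL REFERENCES fk_user(id) ON DELETE CASCADE,
    amount DOUBLE NOT NULL,
    orderDate TEXT NOT NULL
);
-- With PRAGMA foreign_keys = ON, DELETE FROM fk_user WHERE id = 1
-- also removes every fk_order row whose userId is 1.
```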
+
+ /**
+ * Test SET_NULL delete behavior with @References
+ * Verifies that child foreign keys are set to NULL when parent is deleted
+ */
+ @OptIn(ExperimentalDSLDatabaseAPI::class)
+ fun testForeignKeySetNullDelete() {
+ Database(getForeignKeyDBConfig(), true).databaseAutoClose { database ->
+ // Enable foreign keys
+ database {
+ PRAGMA_FOREIGN_KEYS(true)
+ }
+
+ // Insert parent users
+ val user = FKUser(id = null, email = "author@example.com", name = "Author")
+ database {
+ FKUserTable { table ->
+ table INSERT user
+ }
+ }
+
+ // Insert posts by the user
+ val post1 = FKPost(id = null, authorId = 1L, title = "First Post", content = "Content 1")
+ val post2 = FKPost(id = null, authorId = 1L, title = "Second Post", content = "Content 2")
+ database {
+ FKPostTable { table ->
+ table INSERT listOf(post1, post2)
+ }
+ }
+
+ // Verify posts exist with author
+ lateinit var selectPosts: SelectStatement
+ database {
+ selectPosts = FKPostTable SELECT X
+ }
+ val posts = selectPosts.getResults()
+ assertEquals(2, posts.size)
+ assertEquals(1L, posts[0].authorId)
+ assertEquals(1L, posts[1].authorId)
+
+ // Delete the user - should SET_NULL on authorId
+ database {
+ FKUserTable { table ->
+ table DELETE WHERE (table.id EQ 1L)
+ }
+ }
+
+ // Verify posts still exist but authorId is NULL
+ database {
+ selectPosts = FKPostTable SELECT X
+ }
+ val remainingPosts = selectPosts.getResults()
+ assertEquals(2, remainingPosts.size)
+ assertEquals(null, remainingPosts[0].authorId)
+ assertEquals(null, remainingPosts[1].authorId)
+ assertEquals("First Post", remainingPosts[0].title)
+ assertEquals("Second Post", remainingPosts[1].title)
+ }
+ }
+
+ /**
+ * Test RESTRICT delete behavior with @References
+ * Verifies that parent deletion is prevented when child rows exist
+ */
+ @OptIn(ExperimentalDSLDatabaseAPI::class)
+ fun testForeignKeyRestrictDelete() {
+ Database(getForeignKeyDBConfig(), true).databaseAutoClose { database ->
+ // Enable foreign keys
+ database {
+ PRAGMA_FOREIGN_KEYS(true)
+ }
+
+ // Insert parent user
+ val user = FKUser(id = null, email = "user@example.com", name = "User")
+ database {
+ FKUserTable { table ->
+ table INSERT user
+ }
+ }
+
+ // Insert profile for the user
+ val profile = FKProfile(id = null, userId = 1L, bio = "User bio", website = "https://example.com")
+ database {
+ FKProfileTable { table ->
+ table INSERT profile
+ }
+ }
+
+ // Try to delete user - should fail due to RESTRICT
+ var deleteFailed = false
+ try {
+ database {
+ FKUserTable { table ->
+ table DELETE WHERE (table.id EQ 1L)
+ }
+ }
+ } catch (e: Exception) {
+ deleteFailed = true
+ }
+ assertEquals(true, deleteFailed, "Delete should fail with RESTRICT when child rows exist")
+
+ // Verify user still exists
+ lateinit var selectUsers: SelectStatement
+ database {
+ selectUsers = FKUserTable SELECT X
+ }
+ assertEquals(1, selectUsers.getResults().size)
+
+ // Delete the profile first
+ database {
+ FKProfileTable { table ->
+ table DELETE WHERE (table.userId EQ 1L)
+ }
+ }
+
+ // Now deleting user should succeed
+ database {
+ FKUserTable { table ->
+ table DELETE WHERE (table.id EQ 1L)
+ }
+ }
+
+ // Verify user is deleted
+ database {
+ selectUsers = FKUserTable SELECT X
+ }
+ assertEquals(0, selectUsers.getResults().size)
+ }
+ }
+
+ /**
+ * Test composite foreign keys with @ForeignKey annotation
+ * Verifies multi-column foreign key constraints work correctly
+ */
+ @OptIn(ExperimentalDSLDatabaseAPI::class)
+ fun testCompositeForeignKey() {
+ Database(getForeignKeyDBConfig(), true).databaseAutoClose { database ->
+ // Enable foreign keys
+ database {
+ PRAGMA_FOREIGN_KEYS(true)
+ }
+
+ // Insert parent products with composite primary key
+ val product1 = FKProduct(categoryId = 1, productCode = "P001", name = "Widget", price = 19.99)
+ val product2 = FKProduct(categoryId = 1, productCode = "P002", name = "Gadget", price = 29.99)
+ val product3 = FKProduct(categoryId = 2, productCode = "P001", name = "Tool", price = 39.99)
+ database {
+ FKProductTable { table ->
+ table INSERT listOf(product1, product2, product3)
+ }
+ }
+
+ // Insert order items with valid composite foreign keys - should succeed
+ val item1 = FKOrderItem(id = null, productCategory = 1, productCode = "P001", quantity = 2, subtotal = 39.98)
+ val item2 = FKOrderItem(id = null, productCategory = 2, productCode = "P001", quantity = 1, subtotal = 39.99)
+ database {
+ FKOrderItemTable { table ->
+ table INSERT listOf(item1, item2)
+ }
+ }
+
+ lateinit var selectItems: SelectStatement
+ database {
+ selectItems = FKOrderItemTable SELECT X
+ }
+ assertEquals(2, selectItems.getResults().size)
+
+ // Try to insert with invalid composite foreign key - should fail
+ val invalidItem = FKOrderItem(id = null, productCategory = 1, productCode = "P999", quantity = 1, subtotal = 10.0)
+ var foreignKeyViolated = false
+ try {
+ database {
+ FKOrderItemTable { table ->
+ table INSERT invalidItem
+ }
+ }
+ } catch (e: Exception) {
+ foreignKeyViolated = true
+ }
+ assertEquals(true, foreignKeyViolated, "Insert with invalid composite foreign key should fail")
+
+ // Delete product (1, P001) - should CASCADE delete item1
+ database {
+ FKProductTable { table ->
+ table DELETE WHERE ((table.categoryId EQ 1) AND (table.productCode EQ "P001"))
+ }
+ }
+
+ // Verify item1 is deleted
+ database {
+ selectItems = FKOrderItemTable SELECT X
+ }
+ val remainingItems = selectItems.getResults()
+ assertEquals(1, remainingItems.size)
+ assertEquals(2, remainingItems[0].productCategory)
+ }
+ }
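The composite-key scenario corresponds to a table-level FOREIGN KEY constraint, roughly as follows (a sketch; the column types are assumptions):

```sql
CREATE TABLE fk_order_item (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    productCategory INT NOT NULL,
    productCode TEXT NOT NULL,
    quantity INT NOT NULL,
    subtotal DOUBLE NOT NULL,
    FOREIGN KEY (productCategory, productCode)
        REFERENCES fk_product(categoryId, productCode)
        ON DELETE CASCADE
);
```

A multi-column foreign key must be declared at table level; a per-column REFERENCES clause cannot express it.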
+
+ /**
+ * Test multiple foreign keys to different tables
+ * Verifies a table can have foreign keys to multiple parent tables
+ */
+ @OptIn(ExperimentalDSLDatabaseAPI::class)
+ fun testMultipleForeignKeys() {
+ Database(getForeignKeyDBConfig(), true).databaseAutoClose { database ->
+ // Enable foreign keys
+ database {
+ PRAGMA_FOREIGN_KEYS(true)
+ }
+
+ // Insert parent user and post
+ val user = FKUser(id = null, email = "commenter@example.com", name = "Commenter")
+ database {
+ FKUserTable { table ->
+ table INSERT user
+ }
+ }
+
+ val post = FKPost(id = null, authorId = 1L, title = "Test Post", content = "Post content")
+ database {
+ FKPostTable { table ->
+ table INSERT post
+ }
+ }
+
+ // Insert comment with both foreign keys - should succeed
+ val comment = FKComment(id = null, authorId = 1L, postId = 1L, content = "Great post!", createdAt = "2025-01-15")
+ database {
+ FKCommentTable { table ->
+ table INSERT comment
+ }
+ }
+
+ lateinit var selectComments: SelectStatement
+ database {
+ selectComments = FKCommentTable SELECT X
+ }
+ assertEquals(1, selectComments.getResults().size)
+
+ // Try to insert with invalid user foreign key - should fail
+ val invalidComment1 = FKComment(id = null, authorId = 999L, postId = 1L, content = "Comment", createdAt = "2025-01-15")
+ var userFKViolated = false
+ try {
+ database {
+ FKCommentTable { table ->
+ table INSERT invalidComment1
+ }
+ }
+ } catch (e: Exception) {
+ userFKViolated = true
+ }
+ assertEquals(true, userFKViolated, "Insert with invalid user foreign key should fail")
+
+ // Try to insert with invalid post foreign key - should fail
+ val invalidComment2 = FKComment(id = null, authorId = 1L, postId = 999L, content = "Comment", createdAt = "2025-01-15")
+ var postFKViolated = false
+ try {
+ database {
+ FKCommentTable { table ->
+ table INSERT invalidComment2
+ }
+ }
+ } catch (e: Exception) {
+ postFKViolated = true
+ }
+ assertEquals(true, postFKViolated, "Insert with invalid post foreign key should fail")
+
+ // Delete user - should CASCADE delete comment
+ database {
+ FKUserTable { table ->
+ table DELETE WHERE (table.id EQ 1L)
+ }
+ }
+
+ // Verify comment is deleted
+ database {
+ selectComments = FKCommentTable SELECT X
+ }
+ assertEquals(0, selectComments.getResults().size)
+ }
+ }
+
+ /**
+ * Test CREATE SQL generation for foreign keys
+ * Verifies that foreign key constraints are correctly included in CREATE SQL
+ */
+ fun testForeignKeyCreateSQL() {
+ // Test 1: Simple foreign key with @References
+ val orderSQL = FKOrderTable.createSQL
+ assertEquals(true, orderSQL.contains("REFERENCES fk_user(id)"))
+ assertEquals(true, orderSQL.contains("ON DELETE CASCADE"))
+
+ // Test 2: SET_NULL trigger
+ val postSQL = FKPostTable.createSQL
+ assertEquals(true, postSQL.contains("REFERENCES fk_user(id)"))
+ assertEquals(true, postSQL.contains("ON DELETE SET NULL"))
+
+ // Test 3: RESTRICT trigger
+ val profileSQL = FKProfileTable.createSQL
+ assertEquals(true, profileSQL.contains("REFERENCES fk_user(id)"))
+ assertEquals(true, profileSQL.contains("ON DELETE RESTRICT"))
+
+ // Test 4: Composite foreign key with @ForeignKey
+ val orderItemSQL = FKOrderItemTable.createSQL
+ assertEquals(true, orderItemSQL.contains("FOREIGN KEY"))
+ assertEquals(true, orderItemSQL.contains("REFERENCES fk_product"))
+ assertEquals(true, orderItemSQL.contains("ON DELETE CASCADE"))
+
+ // Test 5: Multiple foreign keys
+ val commentSQL = FKCommentTable.createSQL
+ assertEquals(true, commentSQL.contains("REFERENCES fk_user(id)"))
+ assertEquals(true, commentSQL.contains("REFERENCES fk_post(id)"))
+ }
+
+ /**
+ * Test foreign key constraint without PRAGMA_FOREIGN_KEYS enabled
+ * Verifies that constraints are not enforced when PRAGMA is not enabled
+ */
+ @OptIn(ExperimentalDSLDatabaseAPI::class)
+ fun testForeignKeyWithoutPragma() {
+ Database(getForeignKeyDBConfig(), true).databaseAutoClose { database ->
+ // Note: NOT enabling PRAGMA_FOREIGN_KEYS
+
+ // Insert parent user
+ val user = FKUser(id = null, email = "test@example.com", name = "Test")
+ database {
+ FKUserTable { table ->
+ table INSERT user
+ }
+ }
+
+ // Insert order with INVALID foreign key - should succeed without enforcement
+ val invalidOrder = FKOrder(id = null, userId = 999L, amount = 99.99, orderDate = "2025-01-15")
+ database {
+ FKOrderTable { table ->
+ table INSERT invalidOrder
+ }
+ }
+
+ // Verify order was inserted despite invalid foreign key
+ lateinit var selectOrders: SelectStatement
+ database {
+ selectOrders = FKOrderTable SELECT X
+ }
+ assertEquals(1, selectOrders.getResults().size)
+ assertEquals(999L, selectOrders.getResults()[0].userId)
+ }
+ }
+
+ /**
+ * Test for @Default annotation - CREATE SQL generation
+ * Verifies that createSQL property contains the DEFAULT clause
+ */
+ fun testDefaultValuesCreateSQL() {
+ // Test 1: Basic default values
+ val defaultValuesSQL = DefaultValuesTestTable.createSQL
+ assertEquals(true, defaultValuesSQL.contains("CREATE TABLE default_values_test"))
+ assertEquals(true, defaultValuesSQL.contains("status TEXT NOT NULL DEFAULT 'active'"))
+ assertEquals(true, defaultValuesSQL.contains("loginCount INT NOT NULL DEFAULT 0"))
+ assertEquals(true, defaultValuesSQL.contains("isEnabled BOOLEAN NOT NULL DEFAULT 1"))
+ assertEquals(true, defaultValuesSQL.contains("createdAt TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP"))
+
+ // Test 2: Nullable columns with default values
+ val defaultNullableSQL = DefaultNullableTestTable.createSQL
+ assertEquals(true, defaultNullableSQL.contains("CREATE TABLE default_nullable_test"))
+ assertEquals(true, defaultNullableSQL.contains("availability TEXT DEFAULT 'In Stock'"))
+ assertEquals(true, defaultNullableSQL.contains("quantity INT DEFAULT 100"))
+ assertEquals(true, defaultNullableSQL.contains("discount DOUBLE DEFAULT 0.0"))
+
+ // Test 3: Default values with foreign key SET_DEFAULT trigger
+ val defaultFKChildSQL = DefaultFKChildTable.createSQL
+ assertEquals(true, defaultFKChildSQL.contains("CREATE TABLE default_fk_child"))
+ assertEquals(true, defaultFKChildSQL.contains("parentId BIGINT NOT NULL DEFAULT 0"))
+ assertEquals(true, defaultFKChildSQL.contains("FOREIGN KEY (parentId) REFERENCES default_fk_parent(id) ON DELETE SET DEFAULT"))
+ }
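Piecing the asserted fragments together, the full generated statement should look approximately like this (the `id` and `name` column definitions are assumptions; only the DEFAULT fragments are asserted above):

```sql
CREATE TABLE default_values_test (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'active',
    loginCount INT NOT NULL DEFAULT 0,
    isEnabled BOOLEAN NOT NULL DEFAULT 1,
    createdAt TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```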
+
+ /**
+ * Test for @Default annotation - INSERT behavior
+ * Verifies that default values are used when columns are omitted in INSERT
+ * Note: SQLlin's INSERT operation always provides all column values from data classes,
+ * so this test verifies the schema is correctly generated with DEFAULT clauses
+ */
+ fun testDefaultValuesInsert() {
+ val config = DSLDBConfiguration(
+ name = DATABASE_NAME,
+ path = path,
+ version = 1,
+ create = {
+ CREATE(DefaultValuesTestTable)
+ CREATE(DefaultNullableTestTable)
+ }
+ )
+ Database(config, true).databaseAutoClose { database ->
+ // Test 1: Verify CREATE SQL contains DEFAULT clauses
+ val createSQL = DefaultValuesTestTable.createSQL
+ assertEquals(true, createSQL.contains("DEFAULT 'active'"))
+ assertEquals(true, createSQL.contains("DEFAULT 0"))
+ assertEquals(true, createSQL.contains("DEFAULT 1"))
+
+ // Test 2: Insert record with all fields specified
+ val record1 = DefaultValuesTest(
+ id = null,
+ name = "Test User",
+ status = "active",
+ loginCount = 0,
+ isEnabled = true,
+ createdAt = "2025-12-14 00:00:00"
+ )
+ database {
+ DefaultValuesTestTable { table ->
+ table INSERT record1
+ }
+ }
+
+ // Verify insertion
+ lateinit var selectStatement: SelectStatement
+ database {
+ selectStatement = DefaultValuesTestTable SELECT X
+ }
+ assertEquals(1, selectStatement.getResults().size)
+ val result = selectStatement.getResults()[0]
+ assertEquals("Test User", result.name)
+ assertEquals("active", result.status)
+ assertEquals(0, result.loginCount)
+
+ // Test 3: Nullable columns with default values
+ val nullableRecord = DefaultNullableTest(
+ id = null,
+ name = "Product A",
+ availability = "In Stock",
+ quantity = 100,
+ discount = 0.0
+ )
+ database {
+ DefaultNullableTestTable { table ->
+ table INSERT nullableRecord
+ }
+ }
+
+ lateinit var selectNullable: SelectStatement
+ database {
+ selectNullable = DefaultNullableTestTable SELECT X
+ }
+ assertEquals(1, selectNullable.getResults().size)
+ assertEquals("Product A", selectNullable.getResults()[0].name)
+ assertEquals("In Stock", selectNullable.getResults()[0].availability)
+ assertEquals(100, selectNullable.getResults()[0].quantity)
+ }
+ }
+
+ /**
+ * Test for @Default annotation with foreign key ON_DELETE_SET_DEFAULT trigger
+ * Verifies that default values are correctly included in CREATE TABLE statements with foreign keys
+ */
+ @OptIn(ExperimentalDSLDatabaseAPI::class)
+ fun testDefaultValuesWithForeignKey() {
+ val config = DSLDBConfiguration(
+ name = DATABASE_NAME,
+ path = path,
+ version = 1,
+ create = {
+ PRAGMA_FOREIGN_KEYS(true)
+ CREATE(DefaultFKParentTable)
+ CREATE(DefaultFKChildTable)
+ }
+ )
+ Database(config, true).databaseAutoClose { database ->
+ // Test 1: Verify CREATE SQL contains DEFAULT with foreign key
+ val childSQL = DefaultFKChildTable.createSQL
+ assertEquals(true, childSQL.contains("parentId BIGINT NOT NULL DEFAULT 0"))
+ assertEquals(true, childSQL.contains("FOREIGN KEY"))
+ assertEquals(true, childSQL.contains("REFERENCES default_fk_parent(id)"))
+ assertEquals(true, childSQL.contains("ON DELETE SET DEFAULT"))
+
+ // Test 2: Verify we can insert parent and child records
+ val parent = DefaultFKParent(id = null, name = "Test Parent")
+ database {
+ DefaultFKParentTable { table ->
+ table INSERT parent
+ }
+ }
+
+ // Get parent ID
+ lateinit var parentSelect: SelectStatement
+ database {
+ parentSelect = DefaultFKParentTable SELECT X
+ }
+ val parents = parentSelect.getResults()
+ assertEquals(1, parents.size)
+ val parentId = parents[0].id!!
+
+ // Test 3: Insert child record referencing parent
+ val child = DefaultFKChild(id = null, parentId = parentId, description = "Test Child")
+ database {
+ DefaultFKChildTable { table ->
+ table INSERT child
+ }
+ }
+
+ // Verify child was inserted with correct parentId
+ lateinit var childSelect: SelectStatement
+ database {
+ childSelect = DefaultFKChildTable SELECT X
+ }
+ assertEquals(1, childSelect.getResults().size)
+ assertEquals(parentId, childSelect.getResults()[0].parentId)
+ assertEquals("Test Child", childSelect.getResults()[0].description)
+ }
+ }
+
private fun getDefaultDBConfig(): DatabaseConfiguration =
DatabaseConfiguration(
name = DATABASE_NAME,
@@ -1981,4 +2805,21 @@ class CommonBasicTest(private val path: DatabasePath) {
CREATE(CombinedConstraintsTestTable)
}
)
+
+ @OptIn(ExperimentalDSLDatabaseAPI::class)
+ private fun getForeignKeyDBConfig(): DSLDBConfiguration =
+ DSLDBConfiguration(
+ name = DATABASE_NAME,
+ path = path,
+ version = 1,
+ create = {
+ CREATE(FKUserTable)
+ CREATE(FKOrderTable)
+ CREATE(FKPostTable)
+ CREATE(FKProfileTable)
+ CREATE(FKProductTable)
+ CREATE(FKOrderItemTable)
+ CREATE(FKCommentTable)
+ }
+ )
}
\ No newline at end of file
diff --git a/sqllin-dsl-test/src/jvmTest/kotlin/com/ctrip/sqllin/dsl/test/JvmTest.kt b/sqllin-dsl-test/src/jvmTest/kotlin/com/ctrip/sqllin/dsl/test/JvmTest.kt
index a96fac0..9c5edd7 100644
--- a/sqllin-dsl-test/src/jvmTest/kotlin/com/ctrip/sqllin/dsl/test/JvmTest.kt
+++ b/sqllin-dsl-test/src/jvmTest/kotlin/com/ctrip/sqllin/dsl/test/JvmTest.kt
@@ -103,6 +103,48 @@ class JvmTest {
@Test
fun testNotNullConstraint() = commonTest.testNotNullConstraint()
+ @Test
+ fun testStringAggregateFunctions() = commonTest.testStringAggregateFunctions()
+
+ @Test
+ fun testIndexOperations() = commonTest.testIndexOperations()
+
+ @Test
+ fun testBlobLengthFunction() = commonTest.testBlobLengthFunction()
+
+ @Test
+ fun testPragmaForeignKeys() = commonTest.testPragmaForeignKeys()
+
+ @Test
+ fun testForeignKeyCascadeDelete() = commonTest.testForeignKeyCascadeDelete()
+
+ @Test
+ fun testForeignKeySetNullDelete() = commonTest.testForeignKeySetNullDelete()
+
+ @Test
+ fun testForeignKeyRestrictDelete() = commonTest.testForeignKeyRestrictDelete()
+
+ @Test
+ fun testCompositeForeignKey() = commonTest.testCompositeForeignKey()
+
+ @Test
+ fun testMultipleForeignKeys() = commonTest.testMultipleForeignKeys()
+
+ @Test
+ fun testForeignKeyCreateSQL() = commonTest.testForeignKeyCreateSQL()
+
+ @Test
+ fun testForeignKeyWithoutPragma() = commonTest.testForeignKeyWithoutPragma()
+
+ @Test
+ fun testDefaultValuesCreateSQL() = commonTest.testDefaultValuesCreateSQL()
+
+ @Test
+ fun testDefaultValuesInsert() = commonTest.testDefaultValuesInsert()
+
+ @Test
+ fun testDefaultValuesWithForeignKey() = commonTest.testDefaultValuesWithForeignKey()
+
@BeforeTest
fun setUp() {
deleteDatabase(path, CommonBasicTest.DATABASE_NAME)
diff --git a/sqllin-dsl-test/src/nativeTest/kotlin/com/ctrip/sqllin/dsl/test/NativeTest.kt b/sqllin-dsl-test/src/nativeTest/kotlin/com/ctrip/sqllin/dsl/test/NativeTest.kt
index 60ea196..f521246 100644
--- a/sqllin-dsl-test/src/nativeTest/kotlin/com/ctrip/sqllin/dsl/test/NativeTest.kt
+++ b/sqllin-dsl-test/src/nativeTest/kotlin/com/ctrip/sqllin/dsl/test/NativeTest.kt
@@ -119,6 +119,48 @@ class NativeTest {
@Test
fun testNotNullConstraint() = commonTest.testNotNullConstraint()
+ @Test
+ fun testStringAggregateFunctions() = commonTest.testStringAggregateFunctions()
+
+ @Test
+ fun testIndexOperations() = commonTest.testIndexOperations()
+
+ @Test
+ fun testBlobLengthFunction() = commonTest.testBlobLengthFunction()
+
+ @Test
+ fun testPragmaForeignKeys() = commonTest.testPragmaForeignKeys()
+
+ @Test
+ fun testForeignKeyCascadeDelete() = commonTest.testForeignKeyCascadeDelete()
+
+ @Test
+ fun testForeignKeySetNullDelete() = commonTest.testForeignKeySetNullDelete()
+
+ @Test
+ fun testForeignKeyRestrictDelete() = commonTest.testForeignKeyRestrictDelete()
+
+ @Test
+ fun testCompositeForeignKey() = commonTest.testCompositeForeignKey()
+
+ @Test
+ fun testMultipleForeignKeys() = commonTest.testMultipleForeignKeys()
+
+ @Test
+ fun testForeignKeyCreateSQL() = commonTest.testForeignKeyCreateSQL()
+
+ @Test
+ fun testForeignKeyWithoutPragma() = commonTest.testForeignKeyWithoutPragma()
+
+ @Test
+ fun testDefaultValuesCreateSQL() = commonTest.testDefaultValuesCreateSQL()
+
+ @Test
+ fun testDefaultValuesInsert() = commonTest.testDefaultValuesInsert()
+
+ @Test
+ fun testDefaultValuesWithForeignKey() = commonTest.testDefaultValuesWithForeignKey()
+
@BeforeTest
fun setUp() {
deleteDatabase(path, CommonBasicTest.DATABASE_NAME)
diff --git a/sqllin-dsl/doc/getting-start-cn.md b/sqllin-dsl/doc/getting-start-cn.md
index 08fd6ba..931f289 100644
--- a/sqllin-dsl/doc/getting-start-cn.md
+++ b/sqllin-dsl/doc/getting-start-cn.md
@@ -400,6 +400,82 @@ data class Product(
)
```
+#### @Default - 列默认值
+
+使用 `@Default` 为 CREATE TABLE 语句中的列指定默认值。当插入行时未显式提供这些列的值时,SQLite 会自动使用这些默认值:
+
+```kotlin
+import com.ctrip.sqllin.dsl.annotation.DBRow
+import com.ctrip.sqllin.dsl.annotation.PrimaryKey
+import com.ctrip.sqllin.dsl.annotation.Default
+import kotlinx.serialization.Serializable
+
+@DBRow
+@Serializable
+data class User(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ val name: String,
+ @Default("'active'") val status: String, // String default
+ @Default("0") val loginCount: Int, // Numeric default
+ @Default("1") val isEnabled: Boolean, // Boolean default (1 = true)
+ @Default("CURRENT_TIMESTAMP") val createdAt: String, // SQLite function
+)
+// Generated SQL: CREATE TABLE User(
+// id INTEGER PRIMARY KEY AUTOINCREMENT,
+// name TEXT NOT NULL,
+// status TEXT NOT NULL DEFAULT 'active',
+// loginCount INT NOT NULL DEFAULT 0,
+// isEnabled INT NOT NULL DEFAULT 1,
+// createdAt TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
+// )
+```
+
+**值格式:**
+- **字符串**:必须用单引号括起来:`'默认文本'`
+- **数字**:纯数字字面量:`0`、`42`、`3.14`
+- **布尔值**:用 `0` 表示 false,用 `1` 表示 true
+- **NULL**:使用字面量 `NULL`
+- **表达式**:SQLite 函数,如 `CURRENT_TIMESTAMP`、`datetime('now')`、`(random())` 等
+
+**与外键触发器的集成:**
+
+当使用 `ON_DELETE_SET_DEFAULT` 或 `ON_UPDATE_SET_DEFAULT` 触发器时,**必须**设置默认值:
+
+```kotlin
+@DBRow
+@Serializable
+data class Order(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(
+ tableName = "User",
+ foreignKeys = ["id"],
+ trigger = Trigger.ON_DELETE_SET_DEFAULT
+ )
+ @Default("0") // REQUIRED when using ON_DELETE_SET_DEFAULT
+ val userId: Long,
+ val amount: Double,
+)
+// When a User is deleted, their Orders' userId becomes 0
+```
+
+**重要注意事项:**
+- **字符串值必须使用单引号**:`'text'`,而不是 `"text"`
+- 默认值不会覆盖 INSERT 语句中显式提供的值
+- 像 `CURRENT_TIMESTAMP` 这样的函数在插入时求值,而不是在创建表时
+- 注解处理器不会验证默认值是否与列类型匹配
+
+**常见陷阱:**
+
+```kotlin
+// ❌ Wrong - using double quotes for strings
+@Default("\"active\"")
+val status: String
+
+// ✅ Correct - using single quotes for strings
+@Default("'active'")
+val status: String
+```
+
### 支持的类型
SQLlin 支持以下 Kotlin 类型用于 `@DBRow` 数据类的属性:
@@ -487,6 +563,341 @@ data class User(
| ByteArray | BLOB |
| Enum | INT |
+### 外键约束
+
+SQLlin 提供了对外键约束的全面支持,以维护表之间的引用完整性。外键通过在插入、更新或删除数据时强制执行规则,确保表之间的关系保持一致。
+
+#### 重要:启用外键
+
+默认情况下,SQLite **不会强制执行**外键约束(为了向后兼容)。你必须在创建表之前使用 `PRAGMA_FOREIGN_KEYS(true)` 显式启用外键强制执行:
+
+```kotlin
+database {
+ // CRITICAL: Enable foreign key enforcement first
+ PRAGMA_FOREIGN_KEYS(true)
+
+ // Now create tables with foreign keys
+ CREATE(UserTable)
+ CREATE(OrderTable) // Has foreign key to UserTable
+}
+```
+
+**关键点:**
+- 此设置是**每个连接**的,必须在每次打开数据库时设置
+- 此设置**不能**在事务内部更改
+- 如果不启用此设置,外键将成为模式的一部分但**不会被强制执行**
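+
+`PRAGMA_FOREIGN_KEYS(true)` 在底层执行的是 SQLite 的 `PRAGMA foreign_keys = ON`,这一逐连接生效的行为可以脱离 SQLlin 直接验证。下面是使用 Python 内置 `sqlite3` 模块的示意(表结构仅作演示,并非 SQLlin 生成的代码;表名用 `Orders` 以避开 SQL 关键字 `Order`):
+
```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE User(id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE Orders(id INTEGER PRIMARY KEY, userId INTEGER REFERENCES User(id))")

# Enforcement is OFF by default: this orphan row is silently accepted.
con.execute("INSERT INTO Orders(userId) VALUES (999)")

# The pragma is a no-op inside an open transaction, so commit first.
con.commit()
con.execute("PRAGMA foreign_keys = ON")

# The same insert is now rejected.
try:
    con.execute("INSERT INTO Orders(userId) VALUES (999)")
    enforced = False
except sqlite3.IntegrityError:
    enforced = True
print(enforced)  # True
```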
+
+#### 定义外键
+
+SQLlin 提供了两种定义外键的方法:
+
+##### 方法 1:使用 @References 的列级外键
+
+对于简单的单列外键,使用 `@References`。这是**大多数用例的推荐方法**:
+
+```kotlin
+import com.ctrip.sqllin.dsl.annotation.DBRow
+import com.ctrip.sqllin.dsl.annotation.PrimaryKey
+import com.ctrip.sqllin.dsl.annotation.References
+import com.ctrip.sqllin.dsl.annotation.Trigger
+import kotlinx.serialization.Serializable
+
+@DBRow
+@Serializable
+data class User(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ val name: String,
+ val email: String,
+)
+
+@DBRow
+@Serializable
+data class Order(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(
+ tableName = "User",
+ foreignKeys = ["id"],
+ trigger = Trigger.ON_DELETE_CASCADE
+ )
+ val userId: Long,
+ val amount: Double,
+ val orderDate: String,
+)
+// Generated SQL: CREATE TABLE Order(
+// id INTEGER PRIMARY KEY AUTOINCREMENT,
+// userId BIGINT REFERENCES User(id) ON DELETE CASCADE,
+// amount DOUBLE,
+// orderDate TEXT
+// )
+```
+
+##### 方法 2:使用 @ForeignKeyGroup + @ForeignKey 的表级外键
+
+对于引用多个列的组合外键,使用此方法:
+
+```kotlin
+import com.ctrip.sqllin.dsl.annotation.DBRow
+import com.ctrip.sqllin.dsl.annotation.PrimaryKey
+import com.ctrip.sqllin.dsl.annotation.CompositePrimaryKey
+import com.ctrip.sqllin.dsl.annotation.ForeignKeyGroup
+import com.ctrip.sqllin.dsl.annotation.ForeignKey
+import com.ctrip.sqllin.dsl.annotation.Trigger
+import kotlinx.serialization.Serializable
+
+@DBRow
+@Serializable
+data class Product(
+ @CompositePrimaryKey val categoryId: Int,
+ @CompositePrimaryKey val productCode: String,
+ val name: String,
+ val price: Double,
+)
+
+@DBRow
+@Serializable
+@ForeignKeyGroup(
+ group = 0,
+ tableName = "Product",
+ trigger = Trigger.ON_DELETE_CASCADE,
+ constraintName = "fk_product"
+)
+data class OrderItem(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @ForeignKey(group = 0, reference = "categoryId")
+ val productCategory: Int,
+ @ForeignKey(group = 0, reference = "productCode")
+ val productCode: String,
+ val quantity: Int,
+)
+// Generated SQL: CREATE TABLE OrderItem(
+// id INTEGER PRIMARY KEY AUTOINCREMENT,
+// productCategory INT,
+// productCode TEXT,
+// quantity INT,
+// CONSTRAINT fk_product FOREIGN KEY (productCategory,productCode)
+// REFERENCES Product(categoryId,productCode) ON DELETE CASCADE
+// )
+```
+
+#### 引用操作(触发器)
+
+触发器定义了当被引用的行被删除或更新时会发生什么。SQLlin 通过 `Trigger` 枚举支持所有标准 SQLite 触发器:
+
+##### DELETE 触发器
+
+**ON_DELETE_CASCADE**:当父行被删除时,自动删除子行
+```kotlin
+@DBRow
+@Serializable
+data class Order(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_CASCADE)
+ val userId: Long,
+ val amount: Double,
+)
+// When a User is deleted, all their Orders are automatically deleted
+```
+
+**ON_DELETE_SET_NULL**:当父行被删除时,将外键设置为 NULL(需要可空列)
+```kotlin
+@DBRow
+@Serializable
+data class Post(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_SET_NULL)
+ val authorId: Long?, // Must be nullable!
+ val content: String,
+)
+// When a User is deleted, their Posts remain but authorId becomes NULL
+```
+
+**ON_DELETE_RESTRICT**:如果存在子行,阻止删除父行
+```kotlin
+@DBRow
+@Serializable
+data class OrderItem(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "Order", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_RESTRICT)
+ val orderId: Long,
+ val productId: Long,
+)
+// An Order cannot be deleted if it has OrderItems
+```
+
+**ON_DELETE_SET_DEFAULT**:当父行被删除时,将外键设置为其默认值
+```kotlin
+@DBRow
+@Serializable
+data class Comment(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_SET_DEFAULT)
+ val userId: Long = 0L, // Default to 0 (anonymous user)
+ val content: String,
+)
+```
+
+##### UPDATE 触发器
+
+UPDATE 操作也支持相同的引用动作:

+- `ON_UPDATE_CASCADE`:当父主键更改时,更新子外键
+- `ON_UPDATE_SET_NULL`:将子外键设置为 NULL(需要可空列)
+- `ON_UPDATE_RESTRICT`:如果存在子行,阻止更新父主键
+- `ON_UPDATE_SET_DEFAULT`:将子外键设置为默认值
+
+##### 触发器行为摘要
+
+| 触发器 | 父行删除/更新 | 子行行为 | 需要可空? |
+|---------|------------------------|----------------|-------------------|
+| NULL(默认) | 允许 | 无变化 | 否 |
+| CASCADE | 允许 | 子行被删除/更新 | 否 |
+| SET_NULL | 允许 | 外键设置为 NULL | **是** |
+| SET_DEFAULT | 允许 | 外键设置为 DEFAULT | 否 |
+| RESTRICT | **阻止** | 操作失败 | 否 |
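+
+上表描述的均为标准 SQLite 引用动作,可以脱离 SQLlin 直接验证。下面用 Python 内置的 `sqlite3` 模块对比 CASCADE 与 SET NULL(表结构仅作演示):
+
```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # must be enabled per connection

con.execute("CREATE TABLE User(id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE Orders(
    id INTEGER PRIMARY KEY,
    userId INTEGER REFERENCES User(id) ON DELETE CASCADE)""")
con.execute("""CREATE TABLE Post(
    id INTEGER PRIMARY KEY,
    authorId INTEGER REFERENCES User(id) ON DELETE SET NULL)""")

con.execute("INSERT INTO User(id) VALUES (1)")
con.execute("INSERT INTO Orders(userId) VALUES (1)")
con.execute("INSERT INTO Post(authorId) VALUES (1)")

con.execute("DELETE FROM User WHERE id = 1")

# CASCADE removed the order; SET NULL kept the post but cleared its author.
orders = con.execute("SELECT COUNT(*) FROM Orders").fetchone()[0]
author = con.execute("SELECT authorId FROM Post").fetchone()[0]
print(orders, author)  # 0 None
```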
+
+#### 多个外键
+
+一个表可以有多个指向不同父表的外键约束:
+
+```kotlin
+@DBRow
+@Serializable
+@ForeignKeyGroup(group = 0, tableName = "User", trigger = Trigger.ON_DELETE_CASCADE)
+@ForeignKeyGroup(group = 1, tableName = "Product", trigger = Trigger.ON_DELETE_RESTRICT)
+data class OrderItem(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @ForeignKey(group = 0, reference = "id") val userId: Long,
+ @ForeignKey(group = 1, reference = "id") val productId: Long,
+ val quantity: Int,
+)
+// Generated SQL: CREATE TABLE OrderItem(
+// id INTEGER PRIMARY KEY AUTOINCREMENT,
+// userId BIGINT,
+// productId BIGINT,
+// quantity INT,
+// FOREIGN KEY (userId) REFERENCES User(id) ON DELETE CASCADE,
+// FOREIGN KEY (productId) REFERENCES Product(id) ON DELETE RESTRICT
+// )
+```
+
+或使用 `@References`:
+```kotlin
+@DBRow
+@Serializable
+data class OrderItem(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_CASCADE)
+ val userId: Long,
+ @References(tableName = "Product", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_RESTRICT)
+ val productId: Long,
+ val quantity: Int,
+)
+```
+
+#### 命名约束
+
+你可以选择为外键约束命名,以获得更好的错误消息和模式内省:
+
+```kotlin
+@DBRow
+@Serializable
+data class Order(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(
+ tableName = "User",
+ foreignKeys = ["id"],
+ trigger = Trigger.ON_DELETE_CASCADE,
+ constraintName = "fk_order_user" // 可选的约束名称
+ )
+ val userId: Long,
+)
+// Generated SQL: userId BIGINT CONSTRAINT fk_order_user REFERENCES User(id) ON DELETE CASCADE
+```
+
+#### 最佳实践
+
+1. **始终启用外键**:在每个数据库会话开始时调用 `PRAGMA_FOREIGN_KEYS(true)`
+2. **先创建父表**:在创建具有外键的表之前创建被引用的表
+3. **对依赖数据使用 CASCADE**:当子数据不应该在没有父数据的情况下存在时使用 `ON_DELETE_CASCADE`
+4. **对可选关系使用 SET_NULL**:当子数据可以独立存在时使用 `ON_DELETE_SET_NULL`
+5. **使用 RESTRICT 进行保护**:使用 `ON_DELETE_RESTRICT` 防止意外删除父数据
+6. **考虑可空列**:当关系是可选的时使用可空的外键列
+7. **命名你的约束**:使用 `constraintName` 参数以获得更好的调试和错误消息
+
+#### 完整示例
+
+这是一个演示外键关系的完整示例:
+
+```kotlin
+import com.ctrip.sqllin.dsl.Database
+import com.ctrip.sqllin.dsl.annotation.*
+import kotlinx.serialization.Serializable
+
+// Parent table: Users
+@DBRow
+@Serializable
+data class User(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @Unique val email: String,
+ val name: String,
+)
+
+// Child table: Orders with CASCADE delete
+@DBRow
+@Serializable
+data class Order(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_CASCADE)
+ val userId: Long,
+ val amount: Double,
+ val orderDate: String,
+)
+
+// Child table: Posts with SET_NULL delete
+@DBRow
+@Serializable
+data class Post(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_SET_NULL)
+ val authorId: Long?, // Nullable - posts can exist without author
+ val title: String,
+ val content: String,
+)
+
+fun setupDatabase() {
+ database {
+ // CRITICAL: Enable foreign key enforcement
+ PRAGMA_FOREIGN_KEYS(true)
+
+ // Create parent table first
+ CREATE(UserTable)
+
+ // Then create child tables
+ CREATE(OrderTable)
+ CREATE(PostTable)
+
+ // Insert some data
+ val user = User(id = null, email = "alice@example.com", name = "Alice")
+ UserTable INSERT user
+
+ val order = Order(id = null, userId = 1L, amount = 99.99, orderDate = "2025-01-15")
+ OrderTable INSERT order
+
+ // This will fail because user 999 doesn't exist (foreign key violation)
+ try {
+ val invalidOrder = Order(id = null, userId = 999L, amount = 50.0, orderDate = "2025-01-15")
+ OrderTable INSERT invalidOrder // Throws exception!
+ } catch (e: Exception) {
+ println("Foreign key constraint violation: ${e.message}")
+ }
+
+ // Delete the user - CASCADE will delete their orders, SET_NULL will null post authors
+ UserTable DELETE WHERE(UserTable.id EQ 1L)
+ // All orders for user 1 are automatically deleted
+ // All posts by user 1 have authorId set to NULL
+ }
+}
+```
+
## 接下来
你已经学习完了所有的准备工作,现在可以开始学习如何操作数据库了:
diff --git a/sqllin-dsl/doc/getting-start.md b/sqllin-dsl/doc/getting-start.md
index b141e2a..439e679 100644
--- a/sqllin-dsl/doc/getting-start.md
+++ b/sqllin-dsl/doc/getting-start.md
@@ -410,6 +410,82 @@ data class Product(
)
```
+#### @Default - Column Default Values
+
+Use `@Default` to specify default values for columns in your CREATE TABLE statements. SQLite will automatically use these values when inserting rows without explicitly providing values for these columns:
+
+```kotlin
+import com.ctrip.sqllin.dsl.annotation.DBRow
+import com.ctrip.sqllin.dsl.annotation.PrimaryKey
+import com.ctrip.sqllin.dsl.annotation.Default
+import kotlinx.serialization.Serializable
+
+@DBRow
+@Serializable
+data class User(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ val name: String,
+ @Default("'active'") val status: String, // String default
+ @Default("0") val loginCount: Int, // Numeric default
+ @Default("1") val isEnabled: Boolean, // Boolean default (1 = true)
+ @Default("CURRENT_TIMESTAMP") val createdAt: String, // SQLite function
+)
+// Generated SQL: CREATE TABLE User(
+// id INTEGER PRIMARY KEY AUTOINCREMENT,
+// name TEXT NOT NULL,
+// status TEXT NOT NULL DEFAULT 'active',
+// loginCount INT NOT NULL DEFAULT 0,
+// isEnabled INT NOT NULL DEFAULT 1,
+// createdAt TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
+// )
+```
+
+**Value format:**
+- **Strings**: Must be enclosed in single quotes: `'default text'`
+- **Numbers**: Plain numeric literals: `0`, `42`, `3.14`
+- **Booleans**: Use `0` for false or `1` for true
+- **NULL**: Use the literal `NULL`
+- **Expressions**: SQLite functions like `CURRENT_TIMESTAMP`, `datetime('now')`, `(random())`, etc.
+
+**Integration with Foreign Key Triggers:**
+
+Default values are **required** when using `ON_DELETE_SET_DEFAULT` or `ON_UPDATE_SET_DEFAULT` triggers:
+
+```kotlin
+@DBRow
+@Serializable
+data class Order(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(
+ tableName = "User",
+ foreignKeys = ["id"],
+ trigger = Trigger.ON_DELETE_SET_DEFAULT
+ )
+ @Default("0") // REQUIRED when using ON_DELETE_SET_DEFAULT
+ val userId: Long,
+ val amount: Double,
+)
+// When a User is deleted, their Orders' userId becomes 0
+```
+
+**Important notes:**
+- **String values must use single quotes**: `'text'`, not `"text"`
+- Default values don't override explicitly provided values in INSERT statements
+- Functions like `CURRENT_TIMESTAMP` are evaluated at insertion time, not at table creation
+- The annotation processor doesn't validate that the default value matches the column type
+
+**Common pitfall:**
+
+```kotlin
+// ❌ Wrong - using double quotes for strings
+@Default("\"active\"")
+val status: String
+
+// ✅ Correct - using single quotes for strings
+@Default("'active'")
+val status: String
+```
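+
+The DEFAULT semantics above are plain SQLite behavior, so they can be checked outside SQLlin with any SQLite client. A minimal sketch using Python's built-in `sqlite3` module (the schema below is hand-written for illustration, not SQLlin's generated DDL):
+
```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE User(
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL,
        status TEXT NOT NULL DEFAULT 'active',
        loginCount INT NOT NULL DEFAULT 0
    )
""")

# Columns omitted from the INSERT fall back to their declared defaults.
con.execute("INSERT INTO User(name) VALUES ('Alice')")
print(con.execute("SELECT status, loginCount FROM User").fetchone())  # ('active', 0)

# Explicitly provided values are never overridden by the default.
con.execute("INSERT INTO User(name, status) VALUES ('Bob', 'banned')")
print(con.execute("SELECT status FROM User WHERE name = 'Bob'").fetchone())  # ('banned',)
```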
+
### Supported Types
SQLlin supports the following Kotlin types for properties in `@DBRow` data classes:
@@ -497,6 +573,341 @@ data class User(
| ByteArray | BLOB |
| Enum | INT |
+### Foreign Key Constraints
+
+SQLlin provides comprehensive support for foreign key constraints to maintain referential integrity between tables. Foreign keys ensure that relationships between tables remain consistent by enforcing rules when data is inserted, updated, or deleted.
+
+#### Important: Enabling Foreign Keys
+
+By default, SQLite **does not enforce** foreign key constraints for backward compatibility. You must explicitly enable foreign key enforcement using `PRAGMA_FOREIGN_KEYS(true)` before creating tables:
+
+```kotlin
+database {
+ // CRITICAL: Enable foreign key enforcement first
+ PRAGMA_FOREIGN_KEYS(true)
+
+ // Now create tables with foreign keys
+ CREATE(UserTable)
+ CREATE(OrderTable) // Has foreign key to UserTable
+}
+```
+
+**Key points:**
+- This setting is **per-connection** and must be set each time you open the database
+- The setting **cannot be changed** inside a transaction
+- Without enabling this, foreign keys will be part of the schema but **not enforced**
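+
+`PRAGMA_FOREIGN_KEYS(true)` ultimately issues SQLite's `PRAGMA foreign_keys = ON`, so the per-connection behavior can be observed outside SQLlin. A sketch using Python's built-in `sqlite3` module (the schema is hand-written for illustration; the table is named `Orders` to avoid quoting the SQL keyword `Order`):
+
```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE User(id INTEGER PRIMARY KEY)")
con.execute("CREATE TABLE Orders(id INTEGER PRIMARY KEY, userId INTEGER REFERENCES User(id))")

# Enforcement is OFF by default: this orphan row is silently accepted.
con.execute("INSERT INTO Orders(userId) VALUES (999)")

# The pragma is a no-op inside an open transaction, so commit first.
con.commit()
con.execute("PRAGMA foreign_keys = ON")

# The same insert is now rejected.
try:
    con.execute("INSERT INTO Orders(userId) VALUES (999)")
    enforced = False
except sqlite3.IntegrityError:
    enforced = True
print(enforced)  # True
```
+
+Note the `commit()` before the pragma: it mirrors the rule above that the setting cannot be changed inside a transaction.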
+
+#### Defining Foreign Keys
+
+SQLlin provides two approaches for defining foreign keys:
+
+##### Approach 1: Column-Level with @References
+
+Use `@References` for simple single-column foreign keys. This is the **recommended approach** for most use cases:
+
+```kotlin
+import com.ctrip.sqllin.dsl.annotation.DBRow
+import com.ctrip.sqllin.dsl.annotation.PrimaryKey
+import com.ctrip.sqllin.dsl.annotation.References
+import com.ctrip.sqllin.dsl.annotation.Trigger
+import kotlinx.serialization.Serializable
+
+@DBRow
+@Serializable
+data class User(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ val name: String,
+ val email: String,
+)
+
+@DBRow
+@Serializable
+data class Order(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(
+ tableName = "User",
+ foreignKeys = ["id"],
+ trigger = Trigger.ON_DELETE_CASCADE
+ )
+ val userId: Long,
+ val amount: Double,
+ val orderDate: String,
+)
+// Generated SQL: CREATE TABLE Order(
+// id INTEGER PRIMARY KEY AUTOINCREMENT,
+// userId BIGINT REFERENCES User(id) ON DELETE CASCADE,
+// amount DOUBLE,
+// orderDate TEXT
+// )
+```
+
+##### Approach 2: Table-Level with @ForeignKeyGroup + @ForeignKey
+
+Use this approach for composite foreign keys that reference multiple columns:
+
+```kotlin
+import com.ctrip.sqllin.dsl.annotation.DBRow
+import com.ctrip.sqllin.dsl.annotation.PrimaryKey
+import com.ctrip.sqllin.dsl.annotation.CompositePrimaryKey
+import com.ctrip.sqllin.dsl.annotation.ForeignKeyGroup
+import com.ctrip.sqllin.dsl.annotation.ForeignKey
+import com.ctrip.sqllin.dsl.annotation.Trigger
+import kotlinx.serialization.Serializable
+
+@DBRow
+@Serializable
+data class Product(
+ @CompositePrimaryKey val categoryId: Int,
+ @CompositePrimaryKey val productCode: String,
+ val name: String,
+ val price: Double,
+)
+
+@DBRow
+@Serializable
+@ForeignKeyGroup(
+ group = 0,
+ tableName = "Product",
+ trigger = Trigger.ON_DELETE_CASCADE,
+ constraintName = "fk_product"
+)
+data class OrderItem(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @ForeignKey(group = 0, reference = "categoryId")
+ val productCategory: Int,
+ @ForeignKey(group = 0, reference = "productCode")
+ val productCode: String,
+ val quantity: Int,
+)
+// Generated SQL: CREATE TABLE OrderItem(
+// id INTEGER PRIMARY KEY AUTOINCREMENT,
+// productCategory INT,
+// productCode TEXT,
+// quantity INT,
+// CONSTRAINT fk_product FOREIGN KEY (productCategory,productCode)
+// REFERENCES Product(categoryId,productCode) ON DELETE CASCADE
+// )
+```
+
+#### Referential Actions (Triggers)
+
+Triggers define what happens when a referenced row is deleted or updated. SQLlin supports all standard SQLite triggers via the `Trigger` enum:
+
+##### DELETE Triggers
+
+**ON_DELETE_CASCADE**: Automatically delete child rows when parent is deleted
+```kotlin
+@DBRow
+@Serializable
+data class Order(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_CASCADE)
+ val userId: Long,
+ val amount: Double,
+)
+// When a User is deleted, all their Orders are automatically deleted
+```
+
+**ON_DELETE_SET_NULL**: Set foreign key to NULL when parent is deleted (requires nullable column)
+```kotlin
+@DBRow
+@Serializable
+data class Post(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_SET_NULL)
+ val authorId: Long?, // Must be nullable!
+ val content: String,
+)
+// When a User is deleted, their Posts remain but authorId becomes NULL
+```
+
+**ON_DELETE_RESTRICT**: Prevent deletion of parent if children exist
+```kotlin
+@DBRow
+@Serializable
+data class OrderItem(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "Order", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_RESTRICT)
+ val orderId: Long,
+ val productId: Long,
+)
+// An Order cannot be deleted if it has OrderItems
+```
+
+**ON_DELETE_SET_DEFAULT**: Set foreign key to its default value when parent is deleted
+```kotlin
+@DBRow
+@Serializable
+data class Comment(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_SET_DEFAULT)
+ val userId: Long = 0L, // Default to 0 (anonymous user)
+ val content: String,
+)
+```
+
+##### UPDATE Triggers
+
+The same actions are available for UPDATE operations:
+- `ON_UPDATE_CASCADE`: Update child foreign keys when parent primary key changes
+- `ON_UPDATE_SET_NULL`: Set child foreign keys to NULL (requires nullable column)
+- `ON_UPDATE_RESTRICT`: Prevent updating parent primary key if children exist
+- `ON_UPDATE_SET_DEFAULT`: Set child foreign keys to default value
+
+##### Trigger Behavior Summary
+
+| Trigger | Parent Deleted/Updated | Child Behavior | Nullable Required? |
+|---------|------------------------|----------------|-------------------|
+| NULL (default) | Allowed | No change | No |
+| CASCADE | Allowed | Child rows deleted/updated | No |
+| SET_NULL | Allowed | Foreign key set to NULL | **Yes** |
+| SET_DEFAULT | Allowed | Foreign key set to DEFAULT | No |
+| RESTRICT | **Prevented** | Operation fails | No |
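+
+The rows of the table above are standard SQLite referential actions, so they can be verified outside SQLlin. A sketch contrasting CASCADE and SET NULL using Python's built-in `sqlite3` module (schema hand-written for illustration):
+
```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # must be enabled per connection

con.execute("CREATE TABLE User(id INTEGER PRIMARY KEY)")
con.execute("""CREATE TABLE Orders(
    id INTEGER PRIMARY KEY,
    userId INTEGER REFERENCES User(id) ON DELETE CASCADE)""")
con.execute("""CREATE TABLE Post(
    id INTEGER PRIMARY KEY,
    authorId INTEGER REFERENCES User(id) ON DELETE SET NULL)""")

con.execute("INSERT INTO User(id) VALUES (1)")
con.execute("INSERT INTO Orders(userId) VALUES (1)")
con.execute("INSERT INTO Post(authorId) VALUES (1)")

con.execute("DELETE FROM User WHERE id = 1")

# CASCADE removed the order; SET NULL kept the post but cleared its author.
orders = con.execute("SELECT COUNT(*) FROM Orders").fetchone()[0]
author = con.execute("SELECT authorId FROM Post").fetchone()[0]
print(orders, author)  # 0 None
```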
+
+#### Multiple Foreign Keys
+
+A table can have multiple foreign key constraints to different parent tables:
+
+```kotlin
+@DBRow
+@Serializable
+@ForeignKeyGroup(group = 0, tableName = "User", trigger = Trigger.ON_DELETE_CASCADE)
+@ForeignKeyGroup(group = 1, tableName = "Product", trigger = Trigger.ON_DELETE_RESTRICT)
+data class OrderItem(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @ForeignKey(group = 0, reference = "id") val userId: Long,
+ @ForeignKey(group = 1, reference = "id") val productId: Long,
+ val quantity: Int,
+)
+// Generated SQL: CREATE TABLE OrderItem(
+// id INTEGER PRIMARY KEY AUTOINCREMENT,
+// userId BIGINT,
+// productId BIGINT,
+// quantity INT,
+// FOREIGN KEY (userId) REFERENCES User(id) ON DELETE CASCADE,
+// FOREIGN KEY (productId) REFERENCES Product(id) ON DELETE RESTRICT
+// )
+```
+
+Or using `@References`:
+```kotlin
+@DBRow
+@Serializable
+data class OrderItem(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_CASCADE)
+ val userId: Long,
+ @References(tableName = "Product", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_RESTRICT)
+ val productId: Long,
+ val quantity: Int,
+)
+```
+
+#### Named Constraints
+
+You can optionally name your foreign key constraints for better error messages and schema introspection:
+
+```kotlin
+@DBRow
+@Serializable
+data class Order(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(
+ tableName = "User",
+ foreignKeys = ["id"],
+ trigger = Trigger.ON_DELETE_CASCADE,
+ constraintName = "fk_order_user" // Optional constraint name
+ )
+ val userId: Long,
+)
+// Generated SQL: userId BIGINT CONSTRAINT fk_order_user REFERENCES User(id) ON DELETE CASCADE
+```
+
+#### Best Practices
+
+1. **Always enable foreign keys**: Call `PRAGMA_FOREIGN_KEYS(true)` at the start of each database session
+2. **Create parent tables first**: Create referenced tables before creating tables with foreign keys to them
+3. **Use CASCADE for dependent data**: Use `ON_DELETE_CASCADE` when child data should not exist without its parent
+4. **Use SET_NULL for optional relationships**: Use `ON_DELETE_SET_NULL` when child data can exist independently
+5. **Use RESTRICT for protection**: Use `ON_DELETE_RESTRICT` to prevent accidental deletion of parent data
+6. **Consider nullable columns**: Use nullable foreign key columns when the relationship is optional
+7. **Name your constraints**: Use `constraintName` parameter for better debugging and error messages
+
+#### Complete Example
+
+Here's a complete example demonstrating foreign key relationships:
+
+```kotlin
+import com.ctrip.sqllin.dsl.Database
+import com.ctrip.sqllin.dsl.annotation.*
+import kotlinx.serialization.Serializable
+
+// Parent table: Users
+@DBRow
+@Serializable
+data class User(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @Unique val email: String,
+ val name: String,
+)
+
+// Child table: Orders with CASCADE delete
+@DBRow
+@Serializable
+data class Order(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_CASCADE)
+ val userId: Long,
+ val amount: Double,
+ val orderDate: String,
+)
+
+// Child table: Posts with SET_NULL delete
+@DBRow
+@Serializable
+data class Post(
+ @PrimaryKey(isAutoincrement = true) val id: Long?,
+ @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_SET_NULL)
+ val authorId: Long?, // Nullable - posts can exist without author
+ val title: String,
+ val content: String,
+)
+
+fun setupDatabase() {
+ database {
+ // CRITICAL: Enable foreign key enforcement
+ PRAGMA_FOREIGN_KEYS(true)
+
+ // Create parent table first
+ CREATE(UserTable)
+
+ // Then create child tables
+ CREATE(OrderTable)
+ CREATE(PostTable)
+
+ // Insert some data
+ val user = User(id = null, email = "alice@example.com", name = "Alice")
+ UserTable INSERT user
+
+ val order = Order(id = null, userId = 1L, amount = 99.99, orderDate = "2025-01-15")
+ OrderTable INSERT order
+
+ // This will fail because user 999 doesn't exist (foreign key violation)
+ try {
+ val invalidOrder = Order(id = null, userId = 999L, amount = 50.0, orderDate = "2025-01-15")
+ OrderTable INSERT invalidOrder // Throws exception!
+ } catch (e: Exception) {
+ println("Foreign key constraint violation: ${e.message}")
+ }
+
+ // Delete the user - CASCADE will delete their orders, SET_NULL will null post authors
+ UserTable DELETE WHERE(UserTable.id EQ 1L)
+ // All orders for user 1 are automatically deleted
+ // All posts by user 1 have authorId set to NULL
+ }
+}
+```
+
## Next Step
You have finished all the preparation; now you can start learning how to operate the database:
diff --git a/sqllin-dsl/doc/sql-functions-cn.md b/sqllin-dsl/doc/sql-functions-cn.md
index 8b79262..4ef19ed 100644
--- a/sqllin-dsl/doc/sql-functions-cn.md
+++ b/sqllin-dsl/doc/sql-functions-cn.md
@@ -18,7 +18,11 @@ fun sample() {
会帮助我们生成一些 `ClauseElement` 来表示列名。SQL 函数将会接收一个 `ClauseElement` 作为参数并返回一个
`ClauseElement` 作为结果。SQLlin 支持的函数如下:
-> `count`, `max`, `min`, `avg`, `sum`, `abs`, `upper`, `lower`, `length`
+> **聚合函数**: `count`, `max`, `min`, `avg`, `sum`, `group_concat`
+>
+> **数值函数**: `abs`, `round`, `random`, `sign`
+>
+> **字符串函数**: `upper`, `lower`, `length`, `substr`, `trim`, `ltrim`, `rtrim`, `replace`, `instr`, `printf`
`count` 函数有一个不同点,它可以接收一个 `X` 作为参数用于表示 SQL 中的 `count(*)`, 如前面的示例所示。
diff --git a/sqllin-dsl/doc/sql-functions.md b/sqllin-dsl/doc/sql-functions.md
index efa4c99..17fc5be 100644
--- a/sqllin-dsl/doc/sql-functions.md
+++ b/sqllin-dsl/doc/sql-functions.md
@@ -22,7 +22,11 @@ In [Modify Database and Transaction](modify-database-and-transaction.md), we hav
generate some `ClauseElement`s to represent column names. SQL functions will receive a `ClauseElement` as a parameter and return
a `ClauseElement` as the result. The functions supported by SQLlin are as follows:
-> `count`, `max`, `min`, `avg`, `sum`, `abs`, `upper`, `lower`, `length`
+> **Aggregate functions**: `count`, `max`, `min`, `avg`, `sum`, `group_concat`
+>
+> **Numeric functions**: `abs`, `round`, `random`, `sign`
+>
+> **String functions**: `upper`, `lower`, `length`, `substr`, `trim`, `ltrim`, `rtrim`, `replace`, `instr`, `printf`
The `count` function is a special case: it can receive `X` as a parameter to represent `count(*)` in SQL, as shown in the
example above.
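
All of these map to ordinary SQLite scalar and aggregate functions, so their results can be checked against SQLite directly. A sketch using Python's built-in `sqlite3` module (table and data are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE T(name TEXT)")
con.executemany("INSERT INTO T VALUES (?)", [("alice",), ("bob",)])

# group_concat aggregates all rows into one string.
agg = con.execute("SELECT group_concat(name, ', ') FROM T").fetchone()[0]
print(agg)  # alice, bob

# String functions compose like any other SQL expression.
pairs = con.execute("SELECT substr(upper(name), 1, 3), length(name) FROM T").fetchall()
print(pairs)  # [('ALI', 5), ('BOB', 3)]

rep = con.execute("SELECT replace(name, 'bob', 'carol') FROM T WHERE name = 'bob'").fetchone()[0]
print(rep)  # carol
```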
diff --git a/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/DatabaseScope.kt b/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/DatabaseScope.kt
index 8518b28..1d294e7 100644
--- a/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/DatabaseScope.kt
+++ b/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/DatabaseScope.kt
@@ -28,6 +28,7 @@ import com.ctrip.sqllin.dsl.sql.operation.Create
import com.ctrip.sqllin.dsl.sql.operation.Delete
import com.ctrip.sqllin.dsl.sql.operation.Drop
import com.ctrip.sqllin.dsl.sql.operation.Insert
+import com.ctrip.sqllin.dsl.sql.operation.PRAGMA
import com.ctrip.sqllin.dsl.sql.operation.Select
import com.ctrip.sqllin.dsl.sql.operation.Update
import com.ctrip.sqllin.dsl.sql.statement.*
@@ -560,7 +561,7 @@ public class DatabaseScope internal constructor(
@ExperimentalDSLDatabaseAPI
@StatementDslMaker
public infix fun CREATE(table: Table) {
- val statement = Create.create(table, databaseConnection)
+ val statement = Create.createTable(table, databaseConnection)
addStatement(statement)
}
@@ -572,6 +573,57 @@ public class DatabaseScope internal constructor(
@JvmName("create")
public fun Table.CREATE(): Unit = CREATE(this)
+ /**
+ * Creates an index on the specified columns of this table.
+ *
+ * Indexes improve query performance by allowing faster lookups on the indexed columns.
+ * However, they also consume additional storage space and may slow down INSERT, UPDATE,
+ * and DELETE operations.
+ *
+ * Example:
+ * ```kotlin
+ * database {
+ * User::class.table.CREATE_INDEX("idx_user_email", User::email)
+ * User::class.table.CREATE_INDEX("idx_user_name_age", User::name, User::age)
+ * }
+ * ```
+ *
+ * @param indexName The name of the index to create
+ * @param columns One or more column elements to include in the index
+ * @throws IllegalArgumentException if no columns are specified
+ */
+ @ExperimentalDSLDatabaseAPI
+ @StatementDslMaker
+ public fun Table.CREATE_INDEX(indexName: String, vararg columns: ClauseElement) {
+ val statement = Create.createIndex(this, databaseConnection, indexName, *columns)
+ addStatement(statement)
+ }
+
+ /**
+ * Creates a unique index on the specified columns of this table.
+ *
+ * A unique index enforces uniqueness constraints on the indexed columns, preventing
+ * duplicate values. It also improves query performance like a regular index.
+ *
+ * Example:
+ * ```kotlin
+ * database {
+ * User::class.table.CREATE_UNIQUE_INDEX("idx_unique_email", User::email)
+ * Product::class.table.CREATE_UNIQUE_INDEX("idx_unique_sku", Product::sku)
+ * }
+ * ```
+ *
+ * @param indexName The name of the unique index to create
+ * @param columns One or more column elements to include in the unique index
+ * @throws IllegalArgumentException if no columns are specified
+ */
+ @ExperimentalDSLDatabaseAPI
+ @StatementDslMaker
+ public fun Table.CREATE_UNIQUE_INDEX(indexName: String, vararg columns: ClauseElement) {
+ val statement = Create.createUniqueIndex(this, databaseConnection, indexName, *columns)
+ addStatement(statement)
+ }
+
// ========== DROP Operations ==========
/**
@@ -750,4 +802,50 @@ public class DatabaseScope internal constructor(
val statement = Alert.dropColumn(this, column, databaseConnection)
addStatement(statement)
}
+
+ /**
+ * Enables or disables foreign key constraint enforcement in SQLite.
+ *
+ * **⚠️ IMPORTANT**: By default, SQLite **does not enforce** foreign key constraints.
+ * You must explicitly enable them using this function before foreign key constraints
+ * will take effect. This setting is per-connection and must be set each time you
+ * open a database connection.
+ *
+ * ### When to Use
+ * - Call this **before** creating tables with foreign key constraints
+ * - Call this at the **beginning** of each database session if you want foreign key enforcement
+ * - Set to `false` if you need to temporarily disable constraints (e.g., during bulk operations)
+ *
+ * ### Example
+ * ```kotlin
+ * database {
+ * // Enable foreign key enforcement
+ * PRAGMA_FOREIGN_KEYS(true)
+ *
+ * // Now foreign key constraints will be enforced
+ * CREATE(OrderTable) // Table with foreign key to UserTable
+ * OrderTable INSERT Order(userId = 999) // Will fail if user 999 doesn't exist
+ * }
+ * ```
+ *
+ * ### SQLite Behavior
+ * - When enabled (`true`): SQLite enforces all foreign key constraints
+ * - INSERT/UPDATE operations that violate constraints will fail
+ * - DELETE operations trigger ON DELETE actions (CASCADE, SET NULL, etc.)
+ * - When disabled (`false`): Foreign key constraints are ignored
+ * - Constraints are still part of the schema but not enforced
+ * - Useful for data migration or bulk operations
+ *
+ * @param flag `true` to enable foreign key enforcement, `false` to disable
+ *
+ * @see ForeignKeyGroup
+ * @see ForeignKey
+ * @see References
+ */
+ @ExperimentalDSLDatabaseAPI
+ @StatementDslMaker
+ public infix fun PRAGMA_FOREIGN_KEYS(flag: Boolean) {
+ val statement = PRAGMA.foreignKeys(flag, databaseConnection)
+ addStatement(statement)
+ }
}
\ No newline at end of file
diff --git a/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/annotation/ColumnModifier.kt b/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/annotation/ColumnModifier.kt
deleted file mode 100644
index df0838d..0000000
--- a/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/annotation/ColumnModifier.kt
+++ /dev/null
@@ -1,222 +0,0 @@
-/*
- * Copyright (C) 2025 Ctrip.com.
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-package com.ctrip.sqllin.dsl.annotation
-
-/**
- * Modifiers for columns in a table
- * @author Yuang Qiao
- */
-
-/**
- * Marks a property as the primary key for a table within a class annotated with [DBRow].
- *
- * This annotation defines how a data model maps to the primary key of a database table.
- * Within a given `@DBRow` class, **only one** property can be marked with this annotation.
- * To define a primary key that consists of multiple columns, use the [CompositePrimaryKey] annotation instead.
- * Additionally, if a property in the class is marked with [PrimaryKey], the class cannot also use the [CompositePrimaryKey] annotation.
- *
- * ### Type and Nullability Rules
- * The behavior of this annotation differs based on the type of property it annotates.
- * The following rules must be followed:
- *
- * - **When annotating a `Long` property**:
- * The property **must** be declared as a nullable type (`Long?`). This triggers a special
- * SQLite mechanism, mapping the property to an `INTEGER PRIMARY KEY` column, which acts as
- * an alias for the database's internal `rowid`. This is typically used for auto-incrementing
- * keys, where the database assigns an ID upon insertion of a new object (when its ID is `null`).
- *
- * - **When annotating all other types (e.g., `String`, `Int`)**:
- * The property **must** be declared as a non-nullable type (e.g., `String`).
- * This creates a standard, user-provided primary key (such as `TEXT PRIMARY KEY`).
- * You must provide a unique, non-null value for this property upon insertion.
- *
- * @property isAutoincrement Indicates whether to append the `AUTOINCREMENT` keyword to the
- * `INTEGER PRIMARY KEY` column in the `CREATE TABLE` statement. This enables a stricter
- * auto-incrementing strategy that ensures row IDs are never reused.
- * **Important Note**: This parameter is only meaningful when annotating a property of type `Long?`.
- * Setting this to `true` on non-Long properties will result in a compile-time error.
- *
- * @see DBRow
- * @see CompositePrimaryKey
- */
-@Target(AnnotationTarget.PROPERTY)
-@Retention(AnnotationRetention.BINARY)
-public annotation class PrimaryKey(val isAutoincrement: Boolean = false)
-
-/**
- * Marks a property as a part of a composite primary key for the table.
- *
- * This annotation is used to define a primary key that consists of multiple columns.
- * Unlike [PrimaryKey], you can apply this annotation to **multiple properties** within the
- * same [DBRow] class. The combination of all properties marked with [CompositePrimaryKey]
- * will form the table's composite primary key.
- *
- * ### Important Rules
- * - A class can have multiple properties annotated with [CompositePrimaryKey].
- * - If a class uses [CompositePrimaryKey] on any of its properties, it **cannot** also use
- * the [PrimaryKey] annotation on any other property. A table can only have one primary key,
- * which is either a single column or a composite of multiple columns.
- * - All properties annotated with [CompositePrimaryKey] must be of a **non-nullable** type
- * (e.g., `String`, `Int`, `Long`), as primary key columns cannot contain `NULL` values.
- *
- * @see DBRow
- * @see PrimaryKey
- *
- */
-@Target(AnnotationTarget.PROPERTY)
-@Retention(AnnotationRetention.BINARY)
-public annotation class CompositePrimaryKey
-
-/**
- * Marks a text column to use case-insensitive collation in SQLite.
- *
- * This annotation adds the `COLLATE NOCASE` clause to the column definition in the
- * `CREATE TABLE` statement, making string comparisons case-insensitive for this column.
- * This is particularly useful for columns that store user input where case should not
- * affect equality or sorting (e.g., email addresses, usernames).
- *
- * ### Type Restrictions
- * - Can **only** be applied to properties of type `String` or `Char` (and their nullable variants)
- * - Attempting to use this annotation on non-text types will result in a compile-time error
- *
- * ### Example
- * ```kotlin
- * @Serializable
- * @DBRow
- * data class User(
- * @PrimaryKey val id: Long?,
- * @CollateNoCase val email: String, // Case-insensitive email matching
- * val name: String
- * )
- * // Generated SQL: CREATE TABLE User(id INTEGER PRIMARY KEY, email TEXT COLLATE NOCASE, name TEXT)
- * ```
- *
- * ### SQLite Behavior
- * With `COLLATE NOCASE`:
- * - `'ABC' = 'abc'` evaluates to true
- * - `ORDER BY` clauses sort case-insensitively
- * - Indexes on the column are case-insensitive
- *
- * @see DBRow
- * @see Unique
- */
-@Target(AnnotationTarget.PROPERTY)
-@Retention(AnnotationRetention.BINARY)
-public annotation class CollateNoCase
-
-/**
- * Marks a column as unique, enforcing a UNIQUE constraint in the database.
- *
- * This annotation adds the `UNIQUE` keyword to the column definition in the
- * `CREATE TABLE` statement, ensuring that no two rows can have the same value
- * in this column (except for NULL values, which can appear multiple times).
- *
- * ### Single vs. Composite Unique Constraints
- * - Use [Unique] when a **single column** must have unique values
- * - Use [CompositeUnique] when **multiple columns together** must form a unique combination
- *
- * ### Example
- * ```kotlin
- * @Serializable
- * @DBRow
- * data class User(
- * @PrimaryKey val id: Long?,
- * @Unique val email: String, // Each email must be unique
- * @Unique val username: String, // Each username must be unique
- * val age: Int
- * )
- * // Generated SQL: CREATE TABLE User(id INTEGER PRIMARY KEY, email TEXT UNIQUE, username TEXT UNIQUE, age INT)
- * ```
- *
- * ### Nullability Considerations
- * - Multiple NULL values are allowed in a UNIQUE column (NULL is not equal to NULL in SQL)
- * - To prevent NULL values, combine with a non-nullable type: `val email: String`
- *
- * @see DBRow
- * @see CompositeUnique
- * @see CollateNoCase
- */
-@Target(AnnotationTarget.PROPERTY)
-@Retention(AnnotationRetention.BINARY)
-public annotation class Unique
-
-/**
- * Marks a property as part of one or more composite UNIQUE constraints.
- *
- * This annotation allows you to define UNIQUE constraints that span multiple columns.
- * Unlike [Unique], which enforces uniqueness on a single column, [CompositeUnique]
- * ensures that the **combination** of values across multiple columns is unique.
- *
- * ### Grouping
- * Properties can belong to multiple unique constraint groups by specifying different
- * group numbers. Properties with the same group number(s) will be combined into a
- * single composite UNIQUE constraint.
- *
- * ### Example: Single Composite Constraint
- * ```kotlin
- * @Serializable
- * @DBRow
- * data class Enrollment(
- * @PrimaryKey val id: Long?,
- * @CompositeUnique(0) val studentId: Int,
- * @CompositeUnique(0) val courseId: Int,
- * val enrollmentDate: String
- * )
- * // Generated SQL: CREATE TABLE Enrollment(
- * // id INTEGER PRIMARY KEY,
- * // studentId INT,
- * // courseId INT,
- * // enrollmentDate TEXT,
- * // UNIQUE(studentId,courseId)
- * // )
- * // A student cannot enroll in the same course twice
- * ```
- *
- * ### Example: Multiple Composite Constraints
- * ```kotlin
- * @Serializable
- * @DBRow
- * data class Event(
- * @PrimaryKey val id: Long?,
- * @CompositeUnique(0, 1) val userId: Int, // Part of groups 0 and 1
- * @CompositeUnique(0) val eventType: String, // Part of group 0
- * @CompositeUnique(1) val timestamp: Long // Part of group 1
- * )
- * // Generated SQL: CREATE TABLE Event(
- * // id INTEGER PRIMARY KEY,
- * // userId INT,
- * // eventType TEXT,
- * // timestamp BIGINT,
- * // UNIQUE(userId,eventType),
- * // UNIQUE(userId,timestamp)
- * // )
- * ```
- *
- * ### Default Behavior
- * - If no group is specified: `@CompositeUnique()`, defaults to group `0`
- * - All properties with group `0` (explicit or default) form a single composite constraint
- *
- * @property group One or more group numbers (0-based integers) identifying which
- * composite UNIQUE constraint(s) this property belongs to. Properties sharing
- * the same group number are combined into a single `UNIQUE(col1, col2, ...)` clause.
- *
- * @see DBRow
- * @see Unique
- */
-@Target(AnnotationTarget.PROPERTY)
-@Retention(AnnotationRetention.BINARY)
-public annotation class CompositeUnique(vararg val group: Int = [0])
\ No newline at end of file
diff --git a/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/annotation/CreateStatementModifiers.kt b/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/annotation/CreateStatementModifiers.kt
new file mode 100644
index 0000000..6f7b655
--- /dev/null
+++ b/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/annotation/CreateStatementModifiers.kt
@@ -0,0 +1,903 @@
+/*
+ * Copyright (C) 2025 Ctrip.com.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.ctrip.sqllin.dsl.annotation
+
+/**
+ * Modifiers for columns and tables used in CREATE statements
+ * @author Yuang Qiao
+ */
+
+/**
+ * Marks a property as the primary key for a table within a class annotated with [DBRow].
+ *
+ * This annotation defines how a data model maps to the primary key of a database table.
+ * Within a given `@DBRow` class, **only one** property can be marked with this annotation.
+ * To define a primary key that consists of multiple columns, use the [CompositePrimaryKey] annotation instead.
+ * Additionally, if a property in the class is marked with [PrimaryKey], the class cannot also use the [CompositePrimaryKey] annotation.
+ *
+ * ### Type and Nullability Rules
+ * The behavior of this annotation differs based on the type of property it annotates.
+ * The following rules must be followed:
+ *
+ * - **When annotating a `Long` property**:
+ * The property **must** be declared as a nullable type (`Long?`). This triggers a special
+ * SQLite mechanism, mapping the property to an `INTEGER PRIMARY KEY` column, which acts as
+ * an alias for the database's internal `rowid`. This is typically used for auto-incrementing
+ * keys, where the database assigns an ID upon insertion of a new object (when its ID is `null`).
+ *
+ * - **When annotating all other types (e.g., `String`, `Int`)**:
+ * The property **must** be declared as a non-nullable type (e.g., `String`).
+ * This creates a standard, user-provided primary key (such as `TEXT PRIMARY KEY`).
+ * You must provide a unique, non-null value for this property upon insertion.
+ *
+ * @property isAutoincrement Indicates whether to append the `AUTOINCREMENT` keyword to the
+ * `INTEGER PRIMARY KEY` column in the `CREATE TABLE` statement. This enables a stricter
+ * auto-incrementing strategy that ensures row IDs are never reused.
+ * **Important Note**: This parameter is only meaningful when annotating a property of type `Long?`.
+ * Setting this to `true` on non-Long properties will result in a compile-time error.
+ *
+ * @see DBRow
+ * @see CompositePrimaryKey
+ */
+@Target(AnnotationTarget.PROPERTY)
+@Retention(AnnotationRetention.BINARY)
+public annotation class PrimaryKey(val isAutoincrement: Boolean = false)
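
Unlike the other modifiers in this file, the [PrimaryKey] KDoc above carries no usage example. A minimal sketch following its rules (the `User` model and the generated SQL shown are illustrative, not taken from the library's tests):

```kotlin
@Serializable
@DBRow
data class User(
    // Nullable Long → INTEGER PRIMARY KEY (rowid alias); the database assigns the id
    // when a new object is inserted with id = null
    @PrimaryKey(isAutoincrement = true) val id: Long?,
    val name: String
)
// Sketch of the generated SQL:
// CREATE TABLE User(id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)
```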
+
+/**
+ * Marks a property as a part of a composite primary key for the table.
+ *
+ * This annotation is used to define a primary key that consists of multiple columns.
+ * Unlike [PrimaryKey], you can apply this annotation to **multiple properties** within the
+ * same [DBRow] class. The combination of all properties marked with [CompositePrimaryKey]
+ * will form the table's composite primary key.
+ *
+ * ### Important Rules
+ * - A class can have multiple properties annotated with [CompositePrimaryKey].
+ * - If a class uses [CompositePrimaryKey] on any of its properties, it **cannot** also use
+ * the [PrimaryKey] annotation on any other property. A table can only have one primary key,
+ * which is either a single column or a composite of multiple columns.
+ * - All properties annotated with [CompositePrimaryKey] must be of a **non-nullable** type
+ * (e.g., `String`, `Int`, `Long`), as primary key columns cannot contain `NULL` values.
+ *
+ * @see DBRow
+ * @see PrimaryKey
+ */
+@Target(AnnotationTarget.PROPERTY)
+@Retention(AnnotationRetention.BINARY)
+public annotation class CompositePrimaryKey
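
A sketch of a composite key following the rules above (`Membership` is a hypothetical model; the exact generated SQL may differ):

```kotlin
@Serializable
@DBRow
data class Membership(
    @CompositePrimaryKey val userId: Long,   // non-nullable, part of the key
    @CompositePrimaryKey val groupId: Long,  // non-nullable, part of the key
    val joinedAt: String
)
// Sketch of the generated SQL:
// CREATE TABLE Membership(userId BIGINT, groupId BIGINT, joinedAt TEXT, PRIMARY KEY(userId,groupId))
```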
+
+/**
+ * Marks a text column to use case-insensitive collation in SQLite.
+ *
+ * This annotation adds the `COLLATE NOCASE` clause to the column definition in the
+ * `CREATE TABLE` statement, making string comparisons case-insensitive for this column.
+ * This is particularly useful for columns that store user input where case should not
+ * affect equality or sorting (e.g., email addresses, usernames).
+ *
+ * ### Type Restrictions
+ * - Can **only** be applied to properties of type `String` or `Char` (and their nullable variants)
+ * - Attempting to use this annotation on non-text types will result in a compile-time error
+ *
+ * ### Example
+ * ```kotlin
+ * @Serializable
+ * @DBRow
+ * data class User(
+ * @PrimaryKey val id: Long?,
+ * @CollateNoCase val email: String, // Case-insensitive email matching
+ * val name: String
+ * )
+ * // Generated SQL: CREATE TABLE User(id INTEGER PRIMARY KEY, email TEXT COLLATE NOCASE, name TEXT)
+ * ```
+ *
+ * ### SQLite Behavior
+ * With `COLLATE NOCASE`:
+ * - `'ABC' = 'abc'` evaluates to true
+ * - `ORDER BY` clauses sort case-insensitively
+ * - Indexes on the column are case-insensitive
+ *
+ * @see DBRow
+ * @see Unique
+ */
+@Target(AnnotationTarget.PROPERTY)
+@Retention(AnnotationRetention.BINARY)
+public annotation class CollateNoCase
+
+/**
+ * Marks a column as unique, enforcing a UNIQUE constraint in the database.
+ *
+ * This annotation adds the `UNIQUE` keyword to the column definition in the
+ * `CREATE TABLE` statement, ensuring that no two rows can have the same value
+ * in this column (except for NULL values, which can appear multiple times).
+ *
+ * ### Single vs. Composite Unique Constraints
+ * - Use [Unique] when a **single column** must have unique values
+ * - Use [CompositeUnique] when **multiple columns together** must form a unique combination
+ *
+ * ### Example
+ * ```kotlin
+ * @Serializable
+ * @DBRow
+ * data class User(
+ * @PrimaryKey val id: Long?,
+ * @Unique val email: String, // Each email must be unique
+ * @Unique val username: String, // Each username must be unique
+ * val age: Int
+ * )
+ * // Generated SQL: CREATE TABLE User(id INTEGER PRIMARY KEY, email TEXT UNIQUE, username TEXT UNIQUE, age INT)
+ * ```
+ *
+ * ### Nullability Considerations
+ * - Multiple NULL values are allowed in a UNIQUE column (NULL is not equal to NULL in SQL)
+ * - To prevent NULL values, combine with a non-nullable type: `val email: String`
+ *
+ * @see DBRow
+ * @see CompositeUnique
+ * @see CollateNoCase
+ */
+@Target(AnnotationTarget.PROPERTY)
+@Retention(AnnotationRetention.BINARY)
+public annotation class Unique
+
+/**
+ * Marks a property as part of one or more composite UNIQUE constraints.
+ *
+ * This annotation allows you to define UNIQUE constraints that span multiple columns.
+ * Unlike [Unique], which enforces uniqueness on a single column, [CompositeUnique]
+ * ensures that the **combination** of values across multiple columns is unique.
+ *
+ * ### Grouping
+ * Properties can belong to multiple unique constraint groups by specifying different
+ * group numbers. Properties with the same group number(s) will be combined into a
+ * single composite UNIQUE constraint.
+ *
+ * ### Example: Single Composite Constraint
+ * ```kotlin
+ * @Serializable
+ * @DBRow
+ * data class Enrollment(
+ * @PrimaryKey val id: Long?,
+ * @CompositeUnique(0) val studentId: Int,
+ * @CompositeUnique(0) val courseId: Int,
+ * val enrollmentDate: String
+ * )
+ * // Generated SQL: CREATE TABLE Enrollment(
+ * // id INTEGER PRIMARY KEY,
+ * // studentId INT,
+ * // courseId INT,
+ * // enrollmentDate TEXT,
+ * // UNIQUE(studentId,courseId)
+ * // )
+ * // A student cannot enroll in the same course twice
+ * ```
+ *
+ * ### Example: Multiple Composite Constraints
+ * ```kotlin
+ * @Serializable
+ * @DBRow
+ * data class Event(
+ * @PrimaryKey val id: Long?,
+ * @CompositeUnique(0, 1) val userId: Int, // Part of groups 0 and 1
+ * @CompositeUnique(0) val eventType: String, // Part of group 0
+ * @CompositeUnique(1) val timestamp: Long // Part of group 1
+ * )
+ * // Generated SQL: CREATE TABLE Event(
+ * // id INTEGER PRIMARY KEY,
+ * // userId INT,
+ * // eventType TEXT,
+ * // timestamp BIGINT,
+ * // UNIQUE(userId,eventType),
+ * // UNIQUE(userId,timestamp)
+ * // )
+ * ```
+ *
+ * ### Default Behavior
+ * - If no group is specified (`@CompositeUnique()`), the property defaults to group `0`
+ * - All properties with group `0` (explicit or default) form a single composite constraint
+ *
+ * @property group One or more group numbers (0-based integers) identifying which
+ * composite UNIQUE constraint(s) this property belongs to. Properties sharing
+ * the same group number are combined into a single `UNIQUE(col1, col2, ...)` clause.
+ *
+ * @see DBRow
+ * @see Unique
+ */
+@Target(AnnotationTarget.PROPERTY)
+@Retention(AnnotationRetention.BINARY)
+public annotation class CompositeUnique(vararg val group: Int = [0])
+
+/**
+ * Defines a table-level foreign key constraint that references another table.
+ *
+ * This annotation is applied at the **class level** and works together with [@ForeignKey]
+ * annotations on individual properties to create multi-column foreign key constraints.
+ * Use this when you need to reference multiple columns in a parent table.
+ *
+ * ### When to Use
+ * - **Single-column foreign key**: Use [@References] on the property instead
+ * - **Multi-column foreign key**: Use @ForeignKeyGroup at class level + [@ForeignKey] on each property
+ *
+ * ### How It Works
+ * 1. Add @ForeignKeyGroup annotation(s) to your class, each with a unique group number
+ * 2. Mark properties with [@ForeignKey], specifying which group they belong to
+ * 3. Properties in the same group form a composite foreign key constraint
+ *
+ * ### Example: Single Foreign Key
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * @ForeignKeyGroup(
+ * group = 0,
+ * tableName = "User",
+ * trigger = Trigger.ON_DELETE_CASCADE
+ * )
+ * data class Order(
+ * @PrimaryKey val id: Long?,
+ * @ForeignKey(group = 0, reference = "id")
+ * val userId: Long,
+ * val orderDate: String
+ * )
+ * // Generated SQL: CREATE TABLE Order(
+ * // id INTEGER PRIMARY KEY,
+ * // userId BIGINT,
+ * // orderDate TEXT,
+ * // FOREIGN KEY (userId) REFERENCES User(id) ON DELETE CASCADE
+ * // )
+ * ```
+ *
+ * ### Example: Composite Foreign Key
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * @ForeignKeyGroup(
+ * group = 0,
+ * tableName = "Product",
+ * trigger = Trigger.ON_DELETE_CASCADE,
+ * constraintName = "fk_product"
+ * )
+ * data class OrderItem(
+ * @PrimaryKey val id: Long?,
+ * @ForeignKey(group = 0, reference = "categoryId")
+ * val productCategory: Int,
+ * @ForeignKey(group = 0, reference = "productCode")
+ * val productCode: String,
+ * val quantity: Int
+ * )
+ * // Generated SQL: CREATE TABLE OrderItem(
+ * // id INTEGER PRIMARY KEY,
+ * // productCategory INT,
+ * // productCode TEXT,
+ * // quantity INT,
+ * // CONSTRAINT fk_product FOREIGN KEY (productCategory,productCode)
+ * // REFERENCES Product(categoryId,productCode) ON DELETE CASCADE
+ * // )
+ * ```
+ *
+ * ### Multiple Foreign Keys
+ * This annotation is repeatable, so you can define multiple foreign key constraints
+ * by using different group numbers:
+ *
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * @ForeignKeyGroup(group = 0, tableName = "User", trigger = Trigger.ON_DELETE_CASCADE)
+ * @ForeignKeyGroup(group = 1, tableName = "Product", trigger = Trigger.ON_DELETE_RESTRICT)
+ * data class OrderItem(
+ * @PrimaryKey val id: Long?,
+ * @ForeignKey(group = 0, reference = "id") val userId: Long,
+ * @ForeignKey(group = 1, reference = "id") val productId: Long,
+ * val quantity: Int
+ * )
+ * ```
+ *
+ * ### Important Notes
+ * - **Enable foreign keys**: Use `PRAGMA_FOREIGN_KEYS(true)` before creating tables, as SQLite
+ * disables foreign key enforcement by default
+ * - **Order matters**: The order of [@ForeignKey] properties must match the order of referenced
+ * columns in the parent table
+ * - **Nullability with SET NULL triggers**: Properties in a [@ForeignKey] group whose trigger sets
+ *   columns to NULL must be declared nullable, otherwise a compilation error will occur
+ *
+ * @property group A unique integer identifier for this foreign key group (must be unique within the class)
+ * @property tableName The name of the parent table being referenced (cannot be blank)
+ * @property trigger The action to take when the referenced row is deleted or updated
+ * @property constraintName Optional name for the constraint (appears in error messages and schema introspection)
+ *
+ * @see ForeignKey
+ * @see References
+ * @see Trigger
+ * @see com.ctrip.sqllin.dsl.DatabaseScope.PRAGMA_FOREIGN_KEYS
+ */
+@Target(AnnotationTarget.CLASS)
+@Retention(AnnotationRetention.BINARY)
+@Repeatable
+public annotation class ForeignKeyGroup(
+ val group: Int,
+ val tableName: String,
+ val trigger: Trigger = Trigger.NULL,
+ val constraintName: String = "",
+)
+
+/**
+ * Defines a column-level foreign key constraint that references one or more columns in another table.
+ *
+ * This annotation is applied directly to a property and creates an inline foreign key constraint
+ * for that column. Use this for simple, single-property foreign keys. For composite foreign keys
+ * involving multiple columns, use [@ForeignKeyGroup] and [@ForeignKey] instead.
+ *
+ * ### When to Use
+ * - **Single-column foreign key**: Use @References on the property (recommended for simplicity)
+ * - **Multi-column foreign key**: Use [@ForeignKeyGroup] at class level + [@ForeignKey] on each property
+ *
+ * ### Example: Simple Foreign Key
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * data class Order(
+ * @PrimaryKey val id: Long?,
+ * @References(
+ * tableName = "User",
+ * foreignKeys = ["id"],
+ * trigger = Trigger.ON_DELETE_CASCADE
+ * )
+ * val userId: Long,
+ * val orderDate: String
+ * )
+ * // Generated SQL: CREATE TABLE Order(
+ * // id INTEGER PRIMARY KEY,
+ * // userId BIGINT REFERENCES User(id) ON DELETE CASCADE,
+ * // orderDate TEXT
+ * // )
+ * ```
+ *
+ * ### Example: Multi-Column Reference
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * data class OrderItem(
+ * @PrimaryKey val id: Long?,
+ * @References(
+ * tableName = "Product",
+ * foreignKeys = ["categoryId", "productCode"],
+ * trigger = Trigger.ON_DELETE_RESTRICT,
+ * constraintName = "fk_product"
+ * )
+ * val productCompositeKey: String, // This single column references multiple columns
+ * val quantity: Int
+ * )
+ * // Generated SQL: CREATE TABLE OrderItem(
+ * // id INTEGER PRIMARY KEY,
+ * // productCompositeKey TEXT CONSTRAINT fk_product
+ * // REFERENCES Product(categoryId,productCode) ON DELETE RESTRICT,
+ * // quantity INT
+ * // )
+ * ```
+ *
+ * ### Example: Named Constraint
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * data class Comment(
+ * @PrimaryKey val id: Long?,
+ * @References(
+ * tableName = "User",
+ * foreignKeys = ["id"],
+ * trigger = Trigger.ON_DELETE_SET_NULL,
+ * constraintName = "fk_comment_author"
+ * )
+ * val authorId: Long?, // Must be nullable when using ON_DELETE_SET_NULL
+ * val content: String
+ * )
+ * // Generated SQL: CREATE TABLE Comment(
+ * // id INTEGER PRIMARY KEY,
+ * // authorId BIGINT CONSTRAINT fk_comment_author
+ * // REFERENCES User(id) ON DELETE SET NULL,
+ * // content TEXT
+ * // )
+ * ```
+ *
+ * ### Repeatable Usage
+ * This annotation is repeatable, allowing you to apply multiple @References to the same property.
+ * Note that each constraint is enforced independently, so the column's value must satisfy all of them:
+ *
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * data class AuditLog(
+ * @PrimaryKey val id: Long?,
+ * @References(tableName = "User", foreignKeys = ["id"], constraintName = "fk_created_by")
+ * @References(tableName = "Admin", foreignKeys = ["id"], constraintName = "fk_approved_by")
+ * val performedBy: Long, // Must exist in both the User and Admin tables
+ * val action: String
+ * )
+ * // Generated SQL: CREATE TABLE AuditLog(
+ * // id INTEGER PRIMARY KEY,
+ * // performedBy BIGINT
+ * // CONSTRAINT fk_created_by REFERENCES User(id)
+ * // CONSTRAINT fk_approved_by REFERENCES Admin(id),
+ * // action TEXT
+ * // )
+ * ```
+ *
+ * ### Important Notes
+ * - **Enable foreign keys**: Use `PRAGMA_FOREIGN_KEYS(true)` before creating tables, as SQLite
+ * disables foreign key enforcement by default
+ * - **Nullability with SET NULL triggers**: If using `Trigger.ON_DELETE_SET_NULL` or
+ * `Trigger.ON_UPDATE_SET_NULL`, the annotated property must be nullable (e.g., `Long?`)
+ * - **Referenced columns must exist**: The columns specified in `foreignKeys` must exist in
+ * the referenced table
+ *
+ * @property tableName The name of the parent table being referenced (cannot be blank or empty)
+ * @property foreignKeys Array of column names in the parent table to reference (cannot be empty)
+ * @property trigger The action to take when the referenced row is deleted or updated (defaults to no action)
+ * @property constraintName Optional name for the constraint (useful for error messages and debugging)
+ *
+ * @see ForeignKeyGroup
+ * @see ForeignKey
+ * @see Trigger
+ * @see com.ctrip.sqllin.dsl.DatabaseScope.PRAGMA_FOREIGN_KEYS
+ */
+@Target(AnnotationTarget.PROPERTY)
+@Retention(AnnotationRetention.BINARY)
+@Repeatable
+public annotation class References(
+ val tableName: String,
+ val trigger: Trigger = Trigger.NULL,
+ val constraintName: String = "",
+ vararg val foreignKeys: String,
+)
+
+/**
+ * Marks a property as part of a table-level foreign key constraint defined by [@ForeignKeyGroup].
+ *
+ * This annotation is used in conjunction with [@ForeignKeyGroup] to create foreign key constraints
+ * that span one or more columns. Each property annotated with @ForeignKey must specify which
+ * foreign key group it belongs to and which column in the parent table it references.
+ *
+ * ### When to Use
+ * - **Single-column foreign key**: Use [@References] on the property instead (simpler)
+ * - **Multi-column foreign key**: Use @ForeignKeyGroup at class level + @ForeignKey on each property
+ *
+ * ### How It Works
+ * 1. Define one or more [@ForeignKeyGroup] annotations at the class level
+ * 2. Mark each participating property with @ForeignKey, specifying:
+ * - `group`: Which [@ForeignKeyGroup] this property belongs to
+ * - `reference`: The column name in the parent table that this property references
+ *
+ * ### Example: Single Foreign Key (via Group)
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * @ForeignKeyGroup(
+ * group = 0,
+ * tableName = "User",
+ * trigger = Trigger.ON_DELETE_CASCADE
+ * )
+ * data class Order(
+ * @PrimaryKey val id: Long?,
+ * @ForeignKey(group = 0, reference = "id")
+ * val userId: Long,
+ * val orderDate: String
+ * )
+ * // Generated SQL: CREATE TABLE Order(
+ * // id INTEGER PRIMARY KEY,
+ * // userId BIGINT,
+ * // orderDate TEXT,
+ * // FOREIGN KEY (userId) REFERENCES User(id) ON DELETE CASCADE
+ * // )
+ * ```
+ *
+ * ### Example: Composite Foreign Key
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * @ForeignKeyGroup(
+ * group = 0,
+ * tableName = "Product",
+ * trigger = Trigger.ON_DELETE_CASCADE
+ * )
+ * data class OrderItem(
+ * @PrimaryKey val id: Long?,
+ * @ForeignKey(group = 0, reference = "categoryId")
+ * val productCategory: Int,
+ * @ForeignKey(group = 0, reference = "productCode")
+ * val productCode: String,
+ * val quantity: Int
+ * )
+ * // Generated SQL: CREATE TABLE OrderItem(
+ * // id INTEGER PRIMARY KEY,
+ * // productCategory INT,
+ * // productCode TEXT,
+ * // quantity INT,
+ * // FOREIGN KEY (productCategory,productCode)
+ * // REFERENCES Product(categoryId,productCode) ON DELETE CASCADE
+ * // )
+ * ```
+ *
+ * ### Example: Multiple Foreign Keys
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * @ForeignKeyGroup(group = 0, tableName = "User", trigger = Trigger.ON_DELETE_CASCADE)
+ * @ForeignKeyGroup(group = 1, tableName = "Product", trigger = Trigger.ON_DELETE_RESTRICT)
+ * data class OrderItem(
+ * @PrimaryKey val id: Long?,
+ * @ForeignKey(group = 0, reference = "id") val userId: Long,
+ * @ForeignKey(group = 1, reference = "id") val productId: Long,
+ * val quantity: Int
+ * )
+ * // Generated SQL: CREATE TABLE OrderItem(
+ * // id INTEGER PRIMARY KEY,
+ * // userId BIGINT,
+ * // productId BIGINT,
+ * // quantity INT,
+ * // FOREIGN KEY (userId) REFERENCES User(id) ON DELETE CASCADE,
+ * // FOREIGN KEY (productId) REFERENCES Product(id) ON DELETE RESTRICT
+ * // )
+ * ```
+ *
+ * ### Repeatable Usage
+ * This annotation is repeatable, allowing a single property to participate in multiple
+ * foreign key constraints. This is useful for composite keys that reference different tables:
+ *
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * @ForeignKeyGroup(group = 0, tableName = "Department", trigger = Trigger.ON_DELETE_CASCADE)
+ * @ForeignKeyGroup(group = 1, tableName = "Location", trigger = Trigger.ON_DELETE_RESTRICT)
+ * data class Employee(
+ * @PrimaryKey val id: Long?,
+ * @ForeignKey(group = 0, reference = "deptId")
+ * @ForeignKey(group = 1, reference = "locId")
+ * val organizationId: Int, // Used in both foreign keys
+ * @ForeignKey(group = 0, reference = "deptName") val deptName: String,
+ * @ForeignKey(group = 1, reference = "locCode") val locCode: String
+ * )
+ * // Generated SQL: CREATE TABLE Employee(
+ * // id INTEGER PRIMARY KEY,
+ * // organizationId INT,
+ * // deptName TEXT,
+ * // locCode TEXT,
+ * // FOREIGN KEY (organizationId,deptName) REFERENCES Department(deptId,deptName) ON DELETE CASCADE,
+ * // FOREIGN KEY (organizationId,locCode) REFERENCES Location(locId,locCode) ON DELETE RESTRICT
+ * // )
+ * ```
+ *
+ * ### Important Notes
+ * - **Corresponding group must exist**: The `group` number must match a [ForeignKeyGroup] defined at the class level
+ * - **Reference column must exist**: The `reference` must be a valid column name in the parent table
+ * - **Order matters for composite keys**: When multiple properties belong to the same group, their
+ * order in the class determines the order in the FOREIGN KEY clause
+ * - **Nullability with SET NULL triggers**: If the [ForeignKeyGroup] uses `ON_DELETE_SET_NULL` or
+ * `ON_UPDATE_SET_NULL`, all properties in that group must be nullable
+ * - **Enable foreign keys**: Use `PRAGMA_FOREIGN_KEYS(true)` before creating tables
+ *
+ * @property group The foreign key group number (must match a [ForeignKeyGroup] annotation)
+ * @property reference The column name in the parent table that this property references (cannot be blank)
+ *
+ * @see ForeignKeyGroup
+ * @see References
+ * @see Trigger
+ * @see com.ctrip.sqllin.dsl.DatabaseScope.PRAGMA_FOREIGN_KEYS
+ */
+@Target(AnnotationTarget.PROPERTY)
+@Retention(AnnotationRetention.BINARY)
+@Repeatable
+public annotation class ForeignKey(
+ val group: Int,
+ val reference: String,
+)
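The composite foreign key generated for the `OrderItem` example above maps to plain SQLite DDL. As a sketch independent of sqllin, the same schema and its `ON DELETE CASCADE` behavior can be exercised with Python's built-in `sqlite3` module (table and column names are copied from the example; Python is used purely to illustrate the SQLite semantics):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys=1")  # enforcement is off by default
# Parent table: the referenced columns must be collectively unique
db.execute("CREATE TABLE Product(categoryId INT, productCode TEXT,"
           " PRIMARY KEY(categoryId, productCode))")
# Child table mirroring the generated OrderItem SQL above
db.execute("""CREATE TABLE OrderItem(
    id INTEGER PRIMARY KEY,
    productCategory INT,
    productCode TEXT,
    quantity INT,
    FOREIGN KEY (productCategory, productCode)
        REFERENCES Product(categoryId, productCode) ON DELETE CASCADE)""")
db.execute("INSERT INTO Product VALUES (1, 'ABC')")
db.execute("INSERT INTO OrderItem VALUES (NULL, 1, 'ABC', 5)")
# Deleting the parent cascades to the referencing child row
db.execute("DELETE FROM Product WHERE categoryId = 1")
remaining = db.execute("SELECT COUNT(*) FROM OrderItem").fetchone()[0]
print(remaining)  # 0
```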
+
+/**
+ * Defines referential actions (triggers) for foreign key constraints in SQLite.
+ *
+ * These triggers specify what action SQLite should take when a referenced row in the
+ * parent table is deleted or updated. By default (NULL), SQLite applies NO ACTION: child rows
+ * are left unchanged, although the constraint itself is still enforced.
+ *
+ * ### Trigger Types
+ *
+ * #### DELETE Triggers
+ * - **ON_DELETE_CASCADE**: When a parent row is deleted, automatically delete all child rows
+ * - **ON_DELETE_SET_NULL**: When a parent row is deleted, set the foreign key column(s) to NULL
+ * - **ON_DELETE_SET_DEFAULT**: When a parent row is deleted, set the foreign key column(s) to their default value
+ * - **ON_DELETE_RESTRICT**: Prevent deletion of a parent row if child rows exist
+ *
+ * #### UPDATE Triggers
+ * - **ON_UPDATE_CASCADE**: When a parent row's primary key is updated, update all child rows' foreign keys
+ * - **ON_UPDATE_SET_NULL**: When a parent row's primary key is updated, set child foreign keys to NULL
+ * - **ON_UPDATE_SET_DEFAULT**: When a parent row's primary key is updated, set child foreign keys to their default
+ * - **ON_UPDATE_RESTRICT**: Prevent updating a parent row's primary key if child rows exist
+ *
+ * ### Example: CASCADE Delete
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * data class Order(
+ * @PrimaryKey val id: Long?,
+ * @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_CASCADE)
+ * val userId: Long,
+ * val amount: Double
+ * )
+ * // When a User is deleted, all their Orders are automatically deleted
+ * ```
+ *
+ * ### Example: SET NULL on Delete
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * data class Post(
+ * @PrimaryKey val id: Long?,
+ * @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_SET_NULL)
+ * val authorId: Long?, // Must be nullable!
+ * val content: String
+ * )
+ * // When a User is deleted, their Posts remain but authorId becomes NULL
+ * ```
+ *
+ * ### Example: RESTRICT Delete
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * data class OrderItem(
+ * @PrimaryKey val id: Long?,
+ * @References(tableName = "Order", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_RESTRICT)
+ * val orderId: Long,
+ * val productId: Long
+ * )
+ * // An Order cannot be deleted if it has OrderItems
+ * ```
+ *
+ * ### Important Notes
+ * - **SET_NULL requires nullable columns**: When using `ON_DELETE_SET_NULL` or `ON_UPDATE_SET_NULL`,
+ * the annotated property **must** be nullable (e.g., `Long?`, `String?`)
+ * - **SET_DEFAULT requires default values**: SQLite requires a DEFAULT clause in the column definition
+ * - **RESTRICT vs no trigger**: both block a violating operation when enforcement is on; RESTRICT is
+ *   checked immediately, while the default NO ACTION check can be deferred to statement end
+ * - **Enable foreign keys**: Foreign key enforcement must be enabled via `PRAGMA_FOREIGN_KEYS(true)`
+ *
+ * ### SQLite Behavior Summary
+ *
+ * | Trigger | Parent Deleted/Updated | Child Behavior |
+ * |---------|------------------------|----------------|
+ * | NULL (default) | **Prevented** if child rows exist (NO ACTION) | No change |
+ * | CASCADE | Allowed | Child rows deleted/updated |
+ * | SET_NULL | Allowed | Foreign key set to NULL |
+ * | SET_DEFAULT | Allowed | Foreign key set to DEFAULT |
+ * | RESTRICT | **Prevented** | Operation fails |
+ *
+ * @see ForeignKeyGroup
+ * @see ForeignKey
+ * @see References
+ * @see com.ctrip.sqllin.dsl.DatabaseScope.PRAGMA_FOREIGN_KEYS
+ */
+public enum class Trigger {
+ /**
+     * No referential action (SQLite's NO ACTION): the constraint is still checked, but no
+     * automatic change is applied to child rows.
+     * This is the default behavior if no trigger is specified.
+ */
+ NULL,
+
+ /**
+ * When a parent row is deleted, all child rows that reference it are automatically deleted.
+ * This maintains referential integrity by removing orphaned child records.
+ */
+ ON_DELETE_CASCADE,
+
+ /**
+ * When a parent row is deleted, the foreign key column(s) in child rows are set to NULL.
+ * **Requires the foreign key column(s) to be nullable.**
+ */
+ ON_DELETE_SET_NULL,
+
+ /**
+ * When a parent row is deleted, the foreign key column(s) in child rows are set to their default value.
+ * **Requires the column to have a DEFAULT constraint defined.**
+ */
+ ON_DELETE_SET_DEFAULT,
+
+ /**
+ * Prevents deletion of a parent row if any child rows reference it.
+ * The DELETE operation will fail with a constraint violation error.
+ */
+ ON_DELETE_RESTRICT,
+
+ /**
+ * When a parent row's primary key is updated, all child rows' foreign keys are updated to match.
+ * This maintains referential integrity automatically.
+ */
+ ON_UPDATE_CASCADE,
+
+ /**
+ * When a parent row's primary key is updated, the foreign key column(s) in child rows are set to NULL.
+ * **Requires the foreign key column(s) to be nullable.**
+ */
+ ON_UPDATE_SET_NULL,
+
+ /**
+ * When a parent row's primary key is updated, the foreign key column(s) in child rows are set to their default value.
+ * **Requires the column to have a DEFAULT constraint defined.**
+ */
+ ON_UPDATE_SET_DEFAULT,
+
+ /**
+ * Prevents updating a parent row's primary key if any child rows reference it.
+ * The UPDATE operation will fail with a constraint violation error.
+ */
+ ON_UPDATE_RESTRICT,
+}
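The difference between the `SET_NULL` and `RESTRICT` triggers described above can be demonstrated directly against SQLite, here with Python's stdlib `sqlite3` as a neutral harness (the `User`/`Post`/`OrderItem` names follow the KDoc examples):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys=1")  # enforcement is off by default
db.execute("CREATE TABLE User(id INTEGER PRIMARY KEY)")
db.execute("CREATE TABLE Post(id INTEGER PRIMARY KEY,"
           " authorId INTEGER REFERENCES User(id) ON DELETE SET NULL)")
db.execute("CREATE TABLE OrderItem(id INTEGER PRIMARY KEY,"
           " userId INTEGER REFERENCES User(id) ON DELETE RESTRICT)")
db.execute("INSERT INTO User VALUES (1), (2)")
db.execute("INSERT INTO Post VALUES (10, 1)")
db.execute("INSERT INTO OrderItem VALUES (20, 2)")

# SET NULL: the parent row disappears, the child keeps a NULL reference
db.execute("DELETE FROM User WHERE id = 1")
author_after = db.execute("SELECT authorId FROM Post WHERE id = 10").fetchone()[0]
print(author_after)  # None

# RESTRICT: deleting a referenced parent fails with a constraint error
blocked = False
try:
    db.execute("DELETE FROM User WHERE id = 2")
except sqlite3.IntegrityError:
    blocked = True
print(blocked)  # True
```

Note that the `authorId` column carries no `NOT NULL` constraint, matching the rule that `SET_NULL` triggers require nullable foreign key columns.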
+
+/**
+ * Specifies a default value for a column in SQLite CREATE TABLE statements.
+ *
+ * This annotation adds a DEFAULT clause to the column definition, which SQLite uses
+ * to automatically populate the column when a new row is inserted without explicitly
+ * providing a value for this column. Default values are also critical for foreign key
+ * constraints that use `ON DELETE SET DEFAULT` or `ON UPDATE SET DEFAULT` triggers.
+ *
+ * ### When to Use
+ * - To provide fallback values for optional columns
+ * - To ensure columns have sensible defaults when not specified
+ * - When using `Trigger.ON_DELETE_SET_DEFAULT` or `Trigger.ON_UPDATE_SET_DEFAULT` in foreign keys
+ * - To simplify INSERT operations by reducing required fields
+ *
+ * ### Value Format
+ * The `value` parameter must be a valid SQLite literal expression:
+ * - **Strings**: Must be enclosed in single quotes: `'default text'`
+ * - **Numbers**: Plain numeric literals: `0`, `42`, `3.14`
+ * - **Booleans**: Use `0` for false or `1` for true
+ * - **NULL**: Use the literal `NULL` (though this is rarely needed for nullable columns)
+ * - **Expressions**: the keywords `CURRENT_TIMESTAMP`, `CURRENT_DATE`, `CURRENT_TIME`, or
+ *   parenthesized expressions such as `(datetime('now'))`
+ *
+ * ### Example: Basic Default Values
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * data class User(
+ * @PrimaryKey val id: Long?,
+ * val name: String,
+ * @Default("'active'") val status: String, // String default
+ * @Default("0") val loginCount: Int, // Numeric default
+ * @Default("1") val isEnabled: Boolean, // Boolean default (1 = true)
+ * @Default("CURRENT_TIMESTAMP") val createdAt: String // SQLite function
+ * )
+ * // Generated SQL:
+ * // CREATE TABLE User(
+ * // id INTEGER PRIMARY KEY,
+ * // name TEXT NOT NULL,
+ * // status TEXT NOT NULL DEFAULT 'active',
+ * // loginCount INT NOT NULL DEFAULT 0,
+ * // isEnabled INT NOT NULL DEFAULT 1,
+ * // createdAt TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
+ * // )
+ * ```
+ *
+ * ### Example: With Foreign Key SET DEFAULT Trigger
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * data class Order(
+ * @PrimaryKey val id: Long?,
+ * @References(
+ * tableName = "User",
+ * foreignKeys = ["id"],
+ * trigger = Trigger.ON_DELETE_SET_DEFAULT
+ * )
+ * @Default("0") // REQUIRED when using ON_DELETE_SET_DEFAULT
+ * val userId: Long,
+ * val amount: Double
+ * )
+ * // Generated SQL:
+ * // CREATE TABLE Order(
+ * // id INTEGER PRIMARY KEY,
+ * // userId BIGINT NOT NULL DEFAULT 0 REFERENCES User(id) ON DELETE SET DEFAULT,
+ * // amount REAL NOT NULL
+ * // )
+ * // When a User is deleted, their Orders' userId becomes 0
+ * ```
+ *
+ * ### Example: Nullable Column with Default
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * data class Product(
+ * @PrimaryKey val id: Long?,
+ * val name: String,
+ * @Default("'In Stock'") val availability: String?,
+ * @Default("100") val quantity: Int?
+ * )
+ * // Generated SQL:
+ * // CREATE TABLE Product(
+ * // id INTEGER PRIMARY KEY,
+ * // name TEXT NOT NULL,
+ * // availability TEXT DEFAULT 'In Stock',
+ * // quantity INT DEFAULT 100
+ * // )
+ * ```
+ *
+ * ### Example: Using SQLite Functions
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * data class Event(
+ * @PrimaryKey val id: Long?,
+ * val name: String,
+ *     @Default("(datetime('now'))") val timestamp: String,
+ * @Default("(random())") val randomId: Long
+ * )
+ * // Generated SQL:
+ * // CREATE TABLE Event(
+ * // id INTEGER PRIMARY KEY,
+ * // name TEXT NOT NULL,
+ * //   timestamp TEXT NOT NULL DEFAULT (datetime('now')),
+ * // randomId BIGINT NOT NULL DEFAULT (random())
+ * // )
+ * ```
+ *
+ * ### Important Notes
+ * - **String values must use single quotes**: `'text'`, not `"text"`
+ * - **No type validation**: The annotation processor doesn't verify that the default value
+ * matches the column type - SQLite will handle type coercion or raise runtime errors
+ * - **Expressions are passed as-is**: SQLite requires expression defaults to be parenthesized,
+ *   e.g. `(random())` or `(datetime('now', 'localtime'))`
+ * - **Required for SET_DEFAULT triggers**: When using `ON_DELETE_SET_DEFAULT` or
+ * `ON_UPDATE_SET_DEFAULT` triggers on foreign keys, the column **must** have a default
+ * value or be nullable
+ *
+ * ### Common Pitfalls
+ *
+ * #### Wrong: Using double quotes for strings
+ * ```kotlin
+ * @Default("\"active\"") // ❌ Wrong - SQLite uses single quotes
+ * val status: String
+ * ```
+ *
+ * #### Correct: Using single quotes for strings
+ * ```kotlin
+ * @Default("'active'") // ✅ Correct
+ * val status: String
+ * ```
+ *
+ * #### Wrong: Forgetting default with SET_DEFAULT trigger
+ * ```kotlin
+ * @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_SET_DEFAULT)
+ * val userId: Long // ❌ Compile error - needs @Default or must be nullable
+ * ```
+ *
+ * #### Correct: Adding default value
+ * ```kotlin
+ * @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_SET_DEFAULT)
+ * @Default("0")
+ * val userId: Long // ✅ Correct
+ * ```
+ *
+ * ### SQLite Behavior
+ * - Constant default values are fixed in the table schema when the CREATE TABLE statement is
+ *   executed, and applied to each row at insertion time
+ * - Functions like `CURRENT_TIMESTAMP` are evaluated **at insertion time**, not at table creation
+ * - Default values don't override explicitly provided values in INSERT statements
+ * - If a column has both DEFAULT and NOT NULL, you can omit it in INSERT (it won't be NULL)
+ *
+ * @property value The SQLite default value expression (e.g., `'text'`, `0`, `CURRENT_TIMESTAMP`)
+ *
+ * @see References
+ * @see ForeignKeyGroup
+ * @see Trigger.ON_DELETE_SET_DEFAULT
+ * @see Trigger.ON_UPDATE_SET_DEFAULT
+ * @see DBRow
+ */
+@Target(AnnotationTarget.PROPERTY)
+@Retention(AnnotationRetention.BINARY)
+public annotation class Default(val value: String)
\ No newline at end of file
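The DEFAULT-clause behavior documented above (fallback on omitted columns, `CURRENT_TIMESTAMP` filled at insertion time, explicit values winning over defaults) can be checked against SQLite itself; this sketch uses Python's stdlib `sqlite3` only as an illustration, with a schema matching the generated `User` SQL in the KDoc:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE User(
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    status TEXT NOT NULL DEFAULT 'active',
    loginCount INT NOT NULL DEFAULT 0,
    createdAt TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP)""")

# Omitted columns fall back to their DEFAULT clause
db.execute("INSERT INTO User(name) VALUES ('alice')")
status, count, created = db.execute(
    "SELECT status, loginCount, createdAt FROM User WHERE name='alice'").fetchone()
print(status, count)     # active 0
print(len(created) > 0)  # True: timestamp filled at insertion time

# Explicitly provided values always override the default
db.execute("INSERT INTO User(name, status) VALUES ('bob', 'banned')")
bob_status = db.execute(
    "SELECT status FROM User WHERE name='bob'").fetchone()[0]
print(bob_status)  # banned
```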
diff --git a/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/annotation/DslMaker.kt b/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/annotation/DslMakers.kt
similarity index 100%
rename from sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/annotation/DslMaker.kt
rename to sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/annotation/DslMakers.kt
diff --git a/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/sql/clause/Function.kt b/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/sql/clause/Function.kt
index aa31304..d028707 100644
--- a/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/sql/clause/Function.kt
+++ b/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/sql/clause/Function.kt
@@ -46,13 +46,27 @@ public fun Table.count(element: ClauseElement): ClauseNumber =
*
* Usage:
* ```kotlin
- * SELECT(user) WHERE (count(X) GT 100)
+ * SELECT(user) WHERE (count(*) GT 100)
* ```
*/
@FunctionDslMaker
public fun Table.count(x: X): ClauseNumber =
ClauseNumber("count(*)", this, true)
+/**
+ * AVG aggregate function - returns average value.
+ */
+@FunctionDslMaker
+public fun Table.avg(element: ClauseElement): ClauseNumber =
+ ClauseNumber("avg(${element.valueName})", this, true)
+
+/**
+ * SUM aggregate function - returns sum of values.
+ */
+@FunctionDslMaker
+public fun Table.sum(element: ClauseElement): ClauseNumber =
+ ClauseNumber("sum(${element.valueName})", this, true)
+
/**
* MAX aggregate function - returns maximum value.
*/
@@ -68,43 +82,252 @@ public fun Table.min(element: ClauseElement): ClauseNumber =
ClauseNumber("min(${element.valueName})", this, true)
/**
- * AVG aggregate function - returns average value.
+ * GROUP_CONCAT aggregate function - concatenates all non-NULL values in a group with a separator.
+ *
+ * Returns a string which is the concatenation of all non-NULL values of the specified column.
+ * If there are no non-NULL values, the result is NULL.
+ *
+ * Example:
+ * ```kotlin
+ * // Concatenate all user names with comma separator
+ * SELECT(group_concat(User::name, ","))
+ * ```
+ *
+ * @param element The string column to concatenate
+ * @param infix The separator string to use between values
+ * @return ClauseString representing the concatenated result
*/
@FunctionDslMaker
-public fun Table.avg(element: ClauseElement): ClauseNumber =
- ClauseNumber("avg(${element.valueName})", this, true)
+public fun Table.group_concat(element: ClauseString, infix: String): ClauseString =
+ ClauseString("group_concat(${element.valueName},'$infix')", this, true)
/**
- * SUM aggregate function - returns sum of values.
+ * ABS scalar function - returns absolute value.
*/
@FunctionDslMaker
-public fun Table.sum(element: ClauseElement): ClauseNumber =
- ClauseNumber("sum(${element.valueName})", this, true)
+public fun Table.abs(element: ClauseNumber): ClauseNumber =
+ ClauseNumber("abs(${element.valueName})", this, true)
/**
- * ABS scalar function - returns absolute value.
+ * ROUND scalar function - rounds a number to a specified number of decimal places.
+ *
+ * Rounds the numeric value to the specified number of digits after the decimal point.
+ * If digits is negative, rounding occurs to the left of the decimal point.
+ *
+ * Example:
+ * ```kotlin
+ * // Round price to 2 decimal places
+ * SELECT WHERE (round(Product::price, 2) EQ 19.99)
+ * ```
+ *
+ * @param element The numeric value to round
+ * @param digits The number of decimal places to round to
+ * @return ClauseNumber representing the rounded value
*/
@FunctionDslMaker
-public fun Table.abs(number: ClauseElement): ClauseNumber =
- ClauseNumber("abs(${number.valueName})", this, true)
+public fun Table.round(element: ClauseNumber, digits: Int): ClauseNumber =
+ ClauseNumber("round(${element.valueName},$digits)", this, true)
+
+/**
+ * RANDOM scalar function - returns a pseudo-random integer.
+ *
+ * Returns a pseudo-random integer between -9223372036854775808 and +9223372036854775807.
+ *
+ * Example:
+ * ```kotlin
+ * // Select random records
+ * SELECT ORDER_BY(random()) LIMIT 10
+ * ```
+ *
+ * @return ClauseNumber representing the random integer
+ */
+@FunctionDslMaker
+public fun Table.random(): ClauseNumber =
+ ClauseNumber("random()", this, true)
+
+/**
+ * SIGN scalar function - returns the sign of a number.
+ *
+ * Returns -1, 0, or +1 if the argument is negative, zero, or positive respectively.
+ * If the argument is NULL, then NULL is returned.
+ *
+ * Example:
+ * ```kotlin
+ * // Get the sign of balance
+ * SELECT WHERE (sign(Account::balance) EQ 1)
+ * ```
+ * ***Based on SQLite 3.51.1; temporarily disabled.***
+ *
+ * @param element The numeric value to get the sign of
+ * @return ClauseNumber representing -1, 0, or 1
+ */
+/* @FunctionDslMaker
+ public fun Table.sign(element: ClauseNumber): ClauseNumber =
+ ClauseNumber("sign(${element.valueName})", this, true) */
/**
* UPPER scalar function - converts string to uppercase.
*/
@FunctionDslMaker
-public fun Table.upper(element: ClauseElement): ClauseString =
+public fun Table.upper(element: ClauseString): ClauseString =
ClauseString("upper(${element.valueName})", this, true)
/**
* LOWER scalar function - converts string to lowercase.
*/
@FunctionDslMaker
-public fun Table.lower(element: ClauseElement): ClauseString =
+public fun Table.lower(element: ClauseString): ClauseString =
ClauseString("lower(${element.valueName})", this, true)
/**
* LENGTH scalar function - returns string/blob length in bytes.
*/
@FunctionDslMaker
-public fun Table.length(element: ClauseElement): ClauseNumber =
+public fun Table.length(element: ClauseString): ClauseNumber =
ClauseNumber("length(${element.valueName})", this, true)
+
+/**
+ * LENGTH scalar function - returns the length of a BLOB in bytes.
+ *
+ * For BLOBs, returns the number of bytes in the blob.
+ *
+ * Example:
+ * ```kotlin
+ * // Get the size of an image blob
+ * SELECT WHERE (length(Image::data) GT 1024)
+ * ```
+ *
+ * @param element The BLOB column to measure
+ * @return ClauseNumber representing the length in bytes
+ */
+@FunctionDslMaker
+public fun Table.length(element: ClauseBlob): ClauseNumber =
+ ClauseNumber("length(${element.valueName})", this, true)
+
+/**
+ * SUBSTR scalar function - extracts a substring from a string.
+ *
+ * Returns a substring starting at position `start` with length `len`.
+ * In SQLite, the first character has index 1 (not 0).
+ *
+ * Example:
+ * ```kotlin
+ * // Extract first 5 characters
+ * SELECT WHERE (substr(User::name, 1, 5) EQ "Alice")
+ * ```
+ *
+ * @param element The string to extract from
+ * @param start The starting position (1-indexed)
+ * @param len The length of the substring to extract
+ * @return ClauseString representing the extracted substring
+ */
+@FunctionDslMaker
+public fun Table.substr(element: ClauseString, start: Int, len: Int): ClauseString =
+ ClauseString("substr(${element.valueName},$start,$len)", this, true)
+
+/**
+ * TRIM scalar function - removes leading and trailing whitespace from a string.
+ *
+ * Removes spaces from both ends of the string.
+ *
+ * Example:
+ * ```kotlin
+ * // Remove whitespace from names
+ * SELECT(trim(User::name))
+ * ```
+ *
+ * @param element The string to trim
+ * @return ClauseString with whitespace removed from both ends
+ */
+@FunctionDslMaker
+public fun Table.trim(element: ClauseString): ClauseString =
+ ClauseString("trim(${element.valueName})", this, true)
+
+/**
+ * LTRIM scalar function - removes leading (left) whitespace from a string.
+ *
+ * Removes spaces from the beginning of the string only.
+ *
+ * Example:
+ * ```kotlin
+ * // Remove leading whitespace
+ * SELECT(ltrim(User::name))
+ * ```
+ *
+ * @param element The string to trim
+ * @return ClauseString with leading whitespace removed
+ */
+@FunctionDslMaker
+public fun Table.ltrim(element: ClauseString): ClauseString =
+ ClauseString("ltrim(${element.valueName})", this, true)
+
+/**
+ * RTRIM scalar function - removes trailing (right) whitespace from a string.
+ *
+ * Removes spaces from the end of the string only.
+ *
+ * Example:
+ * ```kotlin
+ * // Remove trailing whitespace
+ * SELECT(rtrim(User::name))
+ * ```
+ *
+ * @param element The string to trim
+ * @return ClauseString with trailing whitespace removed
+ */
+@FunctionDslMaker
+public fun Table.rtrim(element: ClauseString): ClauseString =
+ ClauseString("rtrim(${element.valueName})", this, true)
+
+/**
+ * REPLACE scalar function - replaces all occurrences of a substring with another string.
+ *
+ * Returns a copy of the string with all occurrences of `old` replaced by `new`.
+ *
+ * Example:
+ * ```kotlin
+ * // Replace dots with dashes in email
+ * SELECT WHERE (replace(User::email, ".", "-") LIKE "%gmail-com")
+ * ```
+ *
+ * @param element The string to perform replacement on
+ * @param old The substring to find and replace
+ * @param new The replacement string
+ * @return ClauseString with replacements applied
+ */
+@FunctionDslMaker
+public fun Table.replace(element: ClauseString, old: String, new: String): ClauseString =
+ ClauseString("replace(${element.valueName},'$old','$new')", this, true)
+
+/**
+ * INSTR scalar function - finds the first occurrence of a substring.
+ *
+ * Returns the 1-indexed position of the first occurrence of `sub` in the string.
+ * Returns 0 if the substring is not found.
+ *
+ * Example:
+ * ```kotlin
+ * // Find position of '@' in email
+ * SELECT WHERE (instr(User::email, "@") GT 0)
+ * ```
+ *
+ * @param element The string to search in
+ * @param sub The substring to find
+ * @return ClauseNumber representing the position (1-indexed) or 0 if not found
+ */
+@FunctionDslMaker
+public fun Table.instr(element: ClauseString, sub: String): ClauseNumber =
+ ClauseNumber("instr(${element.valueName},'$sub')", this, true)
+
+/**
+ * PRINTF scalar function - formats a string according to a format specification.
+ *
+ * Works similarly to the standard C printf() function. The format string can contain
+ * format specifiers like %s (string), %d (integer), %f (float), etc.
+ *
+ * Example:
+ * ```kotlin
+ * // Format price with currency
+ * SELECT(printf("$%.2f", Product::price))
+ * ```
+ *
+ * @param format The format string with format specifiers
+ * @param element The value to format
+ * @return ClauseString with the formatted result
+ */
+@FunctionDslMaker
+public fun Table.printf(format: String, element: ClauseString): ClauseString =
+ ClauseString("printf('$format',${element.valueName})", this, true)
\ No newline at end of file
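All of the DSL wrappers added in this file delegate to SQLite built-ins, so their semantics (1-indexed `substr`, `instr` returning 0 when absent, `group_concat` separators, `round`/`printf` formatting) can be verified against SQLite directly. A minimal sketch using Python's stdlib `sqlite3` as an illustration, with hypothetical `User` rows:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE User(name TEXT, email TEXT)")
db.executemany("INSERT INTO User VALUES (?, ?)",
               [("Alice", "alice@mail.com"), ("Bob", "bob@mail.com")])

# group_concat joins non-NULL values with the given separator
concat = db.execute("SELECT group_concat(name, ',') FROM User").fetchone()[0]
print(concat)  # Alice,Bob

# substr is 1-indexed: the first five characters of the name
first5 = db.execute("SELECT substr(name, 1, 5) FROM User").fetchone()[0]
print(first5)  # Alice

# instr returns the 1-based position of the substring, or 0 if absent
at_pos = db.execute("SELECT instr(email, '@') FROM User").fetchone()[0]
print(at_pos)  # 6

# round to two decimal places; printf-style formatting of the same value
rounded, formatted = db.execute(
    "SELECT round(19.987, 2), printf('$%.2f', 19.987)").fetchone()
print(rounded, formatted)  # 19.99 $19.99
```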
diff --git a/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/sql/operation/Create.kt b/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/sql/operation/Create.kt
index 20ca770..72ae297 100644
--- a/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/sql/operation/Create.kt
+++ b/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/sql/operation/Create.kt
@@ -18,6 +18,7 @@ package com.ctrip.sqllin.dsl.sql.operation
import com.ctrip.sqllin.driver.DatabaseConnection
import com.ctrip.sqllin.dsl.sql.Table
+import com.ctrip.sqllin.dsl.sql.clause.ClauseElement
import com.ctrip.sqllin.dsl.sql.statement.SingleStatement
import com.ctrip.sqllin.dsl.sql.statement.TableStructureStatement
@@ -33,7 +34,10 @@ import com.ctrip.sqllin.dsl.sql.statement.TableStructureStatement
internal object Create : Operation {
override val sqlStr: String
- get() = ""
+ get() = "CREATE "
+
+ private const val INDEX = "INDEX "
+ private const val UNIQUE_INDEX = "UNIQUE INDEX "
/**
* Builds a CREATE TABLE statement for the given table definition.
@@ -42,6 +46,80 @@ internal object Create : Operation {
* @param connection Database connection for execution
* @return CREATE statement ready for execution
*/
- fun create(table: Table, connection: DatabaseConnection): SingleStatement =
+ fun createTable(table: Table, connection: DatabaseConnection): SingleStatement =
TableStructureStatement(table.createSQL, connection)
+
+ /**
+ * Builds a CREATE INDEX statement for the specified table and columns.
+ *
+ * Creates a regular (non-unique) index to improve query performance on the specified columns.
+ * The generated SQL follows the format: `CREATE INDEX index_name ON table_name(column1, column2, ...)`
+ *
+ * @param table Table definition to create the index on
+ * @param connection Database connection for execution
+ * @param indexName Name for the new index
+ * @param columns One or more columns to include in the index
+ * @return CREATE INDEX statement ready for execution
+ * @throws IllegalArgumentException if no columns are specified
+ */
+ fun createIndex(table: Table, connection: DatabaseConnection, indexName: String, vararg columns: ClauseElement): SingleStatement {
+ require(columns.isNotEmpty()) { "You must create an index for at least one column." }
+ return createIndex(INDEX, table, connection, indexName, *columns)
+ }
+
+ /**
+ * Builds a CREATE UNIQUE INDEX statement for the specified table and columns.
+ *
+ * Creates a unique index that enforces uniqueness constraints on the indexed columns
+ * while also improving query performance. The generated SQL follows the format:
+ * `CREATE UNIQUE INDEX index_name ON table_name(column1, column2, ...)`
+ *
+ * @param table Table definition to create the unique index on
+ * @param connection Database connection for execution
+ * @param indexName Name for the new unique index
+ * @param columns One or more columns to include in the unique index
+ * @return CREATE UNIQUE INDEX statement ready for execution
+ * @throws IllegalArgumentException if no columns are specified
+ */
+ fun createUniqueIndex(table: Table, connection: DatabaseConnection, indexName: String, vararg columns: ClauseElement): SingleStatement {
+ require(columns.isNotEmpty()) { "You must create an index for at least one column." }
+ return createIndex(UNIQUE_INDEX, table, connection, indexName, *columns)
+ }
+
+ /**
+ * Internal helper function to build CREATE INDEX statements with different prefixes.
+ *
+ * Constructs the SQL string for creating either a regular or unique index based on the prefix.
+ *
+ * @param prefix Either "INDEX " or "UNIQUE INDEX " to specify the index type
+ * @param table Table definition to create the index on
+ * @param connection Database connection for execution
+ * @param indexName Name for the new index
+ * @param columns One or more columns to include in the index
+ * @return CREATE INDEX statement ready for execution
+ * @throws IllegalArgumentException if no columns are specified
+ */
+ private fun createIndex(prefix: String, table: Table, connection: DatabaseConnection, indexName: String, vararg columns: ClauseElement): SingleStatement {
+ val sql = buildString {
+ append(sqlStr)
+ append(prefix)
+ append(indexName)
+ append(" ON ")
+ append(table.tableName)
+ append('(')
+ val iterator = columns.iterator()
+ if (!iterator.hasNext())
+ throw IllegalArgumentException("You must create an index for at least one column.")
+ // Extract column name without table prefix (e.g., "book.name" -> "name")
+ val firstColumn = iterator.next().valueName.substringAfterLast('.')
+ append(firstColumn)
+ while (iterator.hasNext()) {
+ append(',')
+ val columnName = iterator.next().valueName.substringAfterLast('.')
+ append(columnName)
+ }
+ append(')')
+ }
+ return TableStructureStatement(sql, connection)
+ }
}
\ No newline at end of file
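The SQL emitted by `createIndex` and `createUniqueIndex` follows the standard `CREATE [UNIQUE] INDEX name ON table(col, ...)` form. As an illustration of what those statements do at the SQLite level (not of the sqllin API itself), using Python's stdlib `sqlite3` with a hypothetical `Book` table:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Book(name TEXT, author TEXT)")

# Regular multi-column index, in the shape createIndex emits
db.execute("CREATE INDEX idx_book ON Book(name,author)")
# Unique index: speeds up lookups AND enforces uniqueness
db.execute("CREATE UNIQUE INDEX uq_book_name ON Book(name)")

db.execute("INSERT INTO Book VALUES ('Dune', 'Herbert')")
duplicate_rejected = False
try:
    db.execute("INSERT INTO Book VALUES ('Dune', 'Other')")
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)  # True

# Both indexes are visible in the schema catalog
names = sorted(r[0] for r in db.execute(
    "SELECT name FROM sqlite_master WHERE type='index'"))
print(names)  # ['idx_book', 'uq_book_name']
```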
diff --git a/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/sql/operation/PRAGMA.kt b/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/sql/operation/PRAGMA.kt
new file mode 100644
index 0000000..e8360cc
--- /dev/null
+++ b/sqllin-dsl/src/commonMain/kotlin/com/ctrip/sqllin/dsl/sql/operation/PRAGMA.kt
@@ -0,0 +1,91 @@
+/*
+ * Copyright (C) 2025 Ctrip.com.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.ctrip.sqllin.dsl.sql.operation
+
+import com.ctrip.sqllin.driver.DatabaseConnection
+import com.ctrip.sqllin.dsl.sql.statement.SingleStatement
+import com.ctrip.sqllin.dsl.sql.statement.TableStructureStatement
+
+/**
+ * SQLite PRAGMA command operations for database configuration.
+ *
+ * This object provides methods to generate PRAGMA SQL statements that configure
+ * various SQLite database settings. PRAGMA statements are special SQLite commands
+ * that query or modify database operation parameters.
+ *
+ * ### Available PRAGMAs
+ * - **foreign_keys**: Enable or disable foreign key constraint enforcement
+ *
+ * ### Usage
+ * This object is used internally by [DatabaseScope][com.ctrip.sqllin.dsl.DatabaseScope]
+ * to generate PRAGMA statements when calling functions like `PRAGMA_FOREIGN_KEYS`.
+ *
+ * @author Yuang Qiao
+ * @see com.ctrip.sqllin.dsl.DatabaseScope.PRAGMA_FOREIGN_KEYS
+ */
+internal object PRAGMA : Operation {
+
+ override val sqlStr: String
+ get() = "PRAGMA "
+
+ /**
+ * Generates a PRAGMA statement to enable or disable foreign key constraint enforcement.
+ *
+ * SQLite disables foreign key constraints by default for backward compatibility.
+ * This method creates a statement that enables or disables foreign key enforcement
+ * for the current database connection.
+ *
+ * ### Important Notes
+ * - This setting is **per-connection** and must be set each time a database is opened
+ * - The setting **cannot be changed** inside a transaction
+ * - When enabled, all INSERT, UPDATE, and DELETE operations will enforce foreign key constraints
+ * - When disabled, foreign key constraints are part of the schema but not enforced
+ *
+ * ### Generated SQL
+ * ```sql
+ * PRAGMA foreign_keys=1; -- Enable foreign keys
+ * PRAGMA foreign_keys=0; -- Disable foreign keys
+ * ```
+ *
+ * ### Example Usage
+ * ```kotlin
+ * database {
+ * PRAGMA_FOREIGN_KEYS(true) // Enable enforcement
+ * CREATE(OrderTable) // Create table with foreign keys
+ *
+ * // Now foreign key constraints will be enforced
+ * OrderTable INSERT Order(userId = 999) // Fails if user 999 doesn't exist
+ * }
+ * ```
+ *
+ * @param enable `true` to enable foreign key enforcement, `false` to disable
+ * @param connection The database connection to execute the statement on
+ * @return A [SingleStatement] that executes the PRAGMA command
+ *
+ * @see com.ctrip.sqllin.dsl.DatabaseScope.PRAGMA_FOREIGN_KEYS
+ * @see com.ctrip.sqllin.dsl.annotation.ForeignKeyGroup
+ * @see com.ctrip.sqllin.dsl.annotation.References
+ */
+ fun foreignKeys(enable: Boolean, connection: DatabaseConnection): SingleStatement {
+ val sql = buildString {
+ append(sqlStr)
+ append("foreign_keys=")
+ append(if (enable) "1;" else "0;")
+ }
+ return TableStructureStatement(sql, connection)
+ }
+}
\ No newline at end of file
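The per-connection nature of `PRAGMA foreign_keys` documented above is easy to observe: the setting does not survive reopening the database file. A sketch with Python's stdlib `sqlite3`, used here only to illustrate the SQLite behavior the PRAGMA object wraps:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "fk.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE User(id INTEGER PRIMARY KEY)")
db.execute("CREATE TABLE OrderT(id INTEGER PRIMARY KEY,"
           " userId INTEGER REFERENCES User(id))")
# Enforcement is off by default: a dangling reference is accepted
db.execute("INSERT INTO OrderT VALUES (1, 999)")
db.commit()
db.close()

# After reopening, the pragma is back to its default: it is per-connection
db = sqlite3.connect(path)
fk_default = db.execute("PRAGMA foreign_keys").fetchone()[0]
print(fk_default)  # 0

# Once enabled, the same dangling insert is rejected
db.execute("PRAGMA foreign_keys=1")
enforced = False
try:
    db.execute("INSERT INTO OrderT VALUES (2, 999)")
except sqlite3.IntegrityError:
    enforced = True
print(enforced)  # True
```

The table is named `OrderT` rather than `Order` because `ORDER` is a reserved word in SQL.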
diff --git a/sqllin-processor/src/main/kotlin/com/ctrip/sqllin/processor/ClauseProcessor.kt b/sqllin-processor/src/main/kotlin/com/ctrip/sqllin/processor/ClauseProcessor.kt
index b8b3fa3..d2906ba 100644
--- a/sqllin-processor/src/main/kotlin/com/ctrip/sqllin/processor/ClauseProcessor.kt
+++ b/sqllin-processor/src/main/kotlin/com/ctrip/sqllin/processor/ClauseProcessor.kt
@@ -24,7 +24,6 @@ import com.google.devtools.ksp.processing.SymbolProcessorEnvironment
import com.google.devtools.ksp.symbol.*
import com.google.devtools.ksp.validate
import java.io.OutputStreamWriter
-import java.lang.IllegalStateException
/**
* KSP symbol processor that generates table objects for database entities.
@@ -71,19 +70,8 @@ class ClauseProcessor(
*/
private companion object {
const val ANNOTATION_DATABASE_ROW_NAME = "com.ctrip.sqllin.dsl.annotation.DBRow"
- const val ANNOTATION_PRIMARY_KEY = "com.ctrip.sqllin.dsl.annotation.PrimaryKey"
- const val ANNOTATION_COMPOSITE_PRIMARY_KEY = "com.ctrip.sqllin.dsl.annotation.CompositePrimaryKey"
- const val ANNOTATION_UNIQUE = "com.ctrip.sqllin.dsl.annotation.Unique"
- const val ANNOTATION_COMPOSITE_UNIQUE = "com.ctrip.sqllin.dsl.annotation.CompositeUnique"
- const val ANNOTATION_NO_CASE = "com.ctrip.sqllin.dsl.annotation.CollateNoCase"
const val ANNOTATION_SERIALIZABLE = "kotlinx.serialization.Serializable"
const val ANNOTATION_TRANSIENT = "kotlinx.serialization.Transient"
-
- const val PROMPT_CANT_ADD_BOTH_ANNOTATION = "You can't add both @PrimaryKey and @CompositePrimaryKey to the same property."
- const val PROMPT_PRIMARY_KEY_MUST_NOT_NULL = "The primary key must be not-null."
- const val PROMPT_PRIMARY_KEY_TYPE = """The primary key's type must be Long when you set the the parameter "isAutoincrement = true" in annotation PrimaryKey."""
- const val PROMPT_PRIMARY_KEY_USE_COUNT = "You only could use PrimaryKey to annotate one property in a class."
- const val PROMPT_NO_CASE_MUST_FOR_TEXT = "You only could add annotation @CollateNoCase for a String or Char typed property."
}
/**
@@ -105,6 +93,9 @@ class ClauseProcessor(
if (classDeclaration.annotations.all { !it.annotationType.resolve().isAssignableFrom(serializableType) })
continue // Don't handle the classes that didn't be annotated 'Serializable'
+ val foreignKeyParser = ForeignKeyParser()
+ foreignKeyParser.parseGroups(classDeclaration.annotations)
+
val className = classDeclaration.simpleName.asString()
val packageName = classDeclaration.packageName.asString()
val objectName = "${className}Table"
@@ -137,17 +128,8 @@ class ClauseProcessor(
writer.write(" inline operator fun invoke(block: $objectName.(table: $objectName) -> R): R = this.block(this)\n\n")
val transientName = resolver.getClassDeclarationByName(ANNOTATION_TRANSIENT)!!.asStarProjectedType()
- val primaryKeyAnnotationName = resolver.getClassDeclarationByName(ANNOTATION_PRIMARY_KEY)!!.asStarProjectedType()
- val compositePrimaryKeyName = resolver.getClassDeclarationByName(ANNOTATION_COMPOSITE_PRIMARY_KEY)!!.asStarProjectedType()
- val noCaseAnnotationName = resolver.getClassDeclarationByName(ANNOTATION_NO_CASE)!!.asStarProjectedType()
- val uniqueAnnotationName = resolver.getClassDeclarationByName(ANNOTATION_UNIQUE)!!.asStarProjectedType()
-
- // Primary key tracking for metadata generation
- var primaryKeyName: String? = null
- var isAutomaticIncrement = false
- var isRowId = false
- val compositePrimaryKeys = ArrayList()
- var isContainsPrimaryKey = false
+
+ val columnConstraintParser = ColumnConstraintParser(resolver)
// CREATE TABLE statement builder (compile-time generation)
val createSQLBuilder = StringBuilder("CREATE TABLE ").apply {
@@ -155,9 +137,6 @@ class ClauseProcessor(
append('(')
}
- // Track composite unique constraints: group number → list of column names
- val compositeUniqueColumns = HashMap>()
-
// Filter out @Transient properties and convert to list for indexed iteration
val propertyList = classDeclaration.getAllProperties().filter { classDeclaration ->
!classDeclaration.annotations.any { ksAnnotation -> ksAnnotation.annotationType.resolve().isAssignableFrom(transientName) }
@@ -170,77 +149,14 @@ class ClauseProcessor(
val elementName = "$className.serializer().descriptor.getElementName($index)"
val isNotNull = property.type.resolve().nullability == Nullability.NOT_NULL
- // Collect the information of the primary key(s).
- val annotations = property.annotations.map { it.annotationType.resolve() }
- val isPrimaryKey = annotations.any { it.isAssignableFrom(primaryKeyAnnotationName) }
-
// Build column definition: name, type, and constraints
with(createSQLBuilder) {
append(propertyName)
- val type = getSQLiteType(property, isPrimaryKey)
- append(type)
-
- // Handle @PrimaryKey annotation
- if (isPrimaryKey) {
- check(!annotations.any { it.isAssignableFrom(compositePrimaryKeyName) }) { PROMPT_CANT_ADD_BOTH_ANNOTATION }
- check(!isNotNull) { PROMPT_PRIMARY_KEY_MUST_NOT_NULL }
- check(!isContainsPrimaryKey) { PROMPT_PRIMARY_KEY_USE_COUNT }
- isContainsPrimaryKey = true
- primaryKeyName = propertyName
-
- append(" PRIMARY KEY")
-
- isAutomaticIncrement = property.annotations.find {
- it.annotationType.resolve().declaration.qualifiedName?.asString() == ANNOTATION_PRIMARY_KEY
- }?.arguments?.firstOrNull()?.value as? Boolean ?: false
- val isLong = type == " INTEGER" || type == " BIGINT"
- if (isAutomaticIncrement) {
- check(isLong) { PROMPT_PRIMARY_KEY_TYPE }
- append(" AUTOINCREMENT")
- }
- isRowId = isLong
- } else if (annotations.any { it.isAssignableFrom(compositePrimaryKeyName) }) {
- // Handle @CompositePrimaryKey - collect for table-level constraint
- check(isNotNull) { PROMPT_PRIMARY_KEY_MUST_NOT_NULL }
- compositePrimaryKeys.add(propertyName)
- } else if (isNotNull) {
- // Add NOT NULL constraint for non-nullable, non-PK columns
- append(" NOT NULL")
- }
-
- // Handle @CollateNoCase annotation - must be on text columns
- if (annotations.any { it.isAssignableFrom(noCaseAnnotationName) }) {
- check(type == " TEXT" || type == " CHAR(1)") { PROMPT_NO_CASE_MUST_FOR_TEXT }
- append(" COLLATE NOCASE")
- }
-
- // Handle @Unique annotation - single column uniqueness
- if (annotations.any { it.isAssignableFrom(uniqueAnnotationName) })
- append(" UNIQUE")
-
- // Handle @CompositeUnique annotation - collect for table-level constraint
- val compositeUniqueAnnotation = property.annotations
- .find { it.annotationType.resolve().declaration.qualifiedName?.asString() == ANNOTATION_COMPOSITE_UNIQUE }
-
- compositeUniqueAnnotation?.run {
- // Extract group numbers from annotation (defaults to group 0 if not specified)
- arguments
- .firstOrNull { it.name?.asString() == "group" }
- .let {
- val list = if (it == null) {
- listOf(0) // Default to group 0
- } else {
- it.value as? List ?: listOf(0)
- }
- // Add this property to each specified group
- list.forEach { group ->
- val groupList = compositeUniqueColumns[group] ?: ArrayList().also { gl ->
- compositeUniqueColumns[group] = gl
- }
- groupList.add(propertyName)
- }
- }
- }
+
+ columnConstraintParser.parseProperty(this, property, propertyName, isNotNull)
+
+ // Handle @Reference and @ForeignKey
+ foreignKeyParser.parseColumnAnnotations(createSQLBuilder, property.annotations, propertyName, isNotNull)
if (index < propertyList.lastIndex)
append(',')
@@ -250,12 +166,10 @@ class ClauseProcessor(
writer.write(" @ColumnNameDslMaker\n")
writer.write(" val $propertyName\n")
writer.write(" get() = $clauseElementTypeName($elementName, this)\n\n")
-
- // Write 'SetClause' code.
writer.write(" @ColumnNameDslMaker\n")
writer.write(" var SetClause<$className>.$propertyName: ${property.typeName}")
val nullableSymbol = when {
- isRowId -> "?\n"
+ columnConstraintParser.isRowId -> "?\n"
isNotNull -> "\n"
else -> "?\n"
}
@@ -264,58 +178,9 @@ class ClauseProcessor(
writer.write(" set(value) = ${appendFunction(elementName, property)}\n\n")
}
- // Write the override instance for property `primaryKeyInfo`.
- if (primaryKeyName == null && compositePrimaryKeys.isEmpty()) {
- writer.write(" override val primaryKeyInfo = null\n\n")
- } else {
- writer.write(" override val primaryKeyInfo = PrimaryKeyInfo(\n")
- if (primaryKeyName == null) {
- writer.write(" primaryKeyName = null,\n")
- } else {
- writer.write(" primaryKeyName = \"$primaryKeyName\",\n")
- }
- writer.write(" isAutomaticIncrement = $isAutomaticIncrement,\n")
- writer.write(" isRowId = $isRowId,\n")
- if (compositePrimaryKeys.isEmpty()) {
- writer.write(" compositePrimaryKeys = null,\n")
- } else {
- writer.write(" compositePrimaryKeys = listOf(\n")
- compositePrimaryKeys.forEach {
- writer.write(" \"$it\",\n")
- }
- writer.write(" )\n")
- }
- writer.write(" )\n\n")
- }
-
- // Append table-level constraints to CREATE TABLE statement
- with(createSQLBuilder) {
- // Add composite primary key constraint if present
- compositePrimaryKeys.takeIf { it.isNotEmpty() }?.let {
- append(",PRIMARY KEY(")
- append(it[0])
- for (i in 1 ..< it.size) {
- append(',')
- append(it[i])
- }
- append(')')
- }
-
- // Add composite unique constraints for each group
- compositeUniqueColumns.values.forEach {
- if (it.isEmpty())
- return@forEach
- append(",UNIQUE(")
- append(it[0])
- for (i in 1 ..< it.size) {
- append(',')
- append(it[i])
- }
- append(')')
- }
-
- append(')')
- }
+ columnConstraintParser.generateCodeForPrimaryKey(writer, createSQLBuilder)
+ foreignKeyParser.generateCodeForForeignKey(createSQLBuilder)
+ createSQLBuilder.append(')')
writer.write(" override val createSQL = \"$createSQLBuilder\"\n")
@@ -492,71 +357,4 @@ class ClauseProcessor(
FullNameCache.BYTE_ARRAY -> "appendAny($elementName, value)"
else -> null
}
-
- /**
- * Determines the SQLite type declaration for a given property.
- *
- * This function resolves the Kotlin type of a property to its corresponding SQLite type
- * string, handling type aliases and enum classes. The result is used in compile-time
- * CREATE TABLE statement generation.
- *
- * ### Type Resolution Strategy
- * 1. **Type Aliases**: Resolves to the underlying type, then maps to SQLite type
- * 2. **Enum Classes**: Maps to SQLite INT type (enums are stored as ordinals)
- * 3. **Standard Types**: Direct mapping via [FullNameCache.getSQLTypeName]
- *
- * ### Primary Key Special Handling
- * When `isPrimaryKey` is true and the property is of type [Long], the function returns
- * " INTEGER" instead of " BIGINT" to enable SQLite's rowid aliasing optimization.
- *
- * ### Example Mappings
- * ```kotlin
- * // Standard type
- * val age: Int // → " INT"
- *
- * // Type alias
- * typealias UserId = Long
- * val id: UserId // → " BIGINT" (or " INTEGER" if primary key)
- *
- * // Enum class
- * enum class Status { ACTIVE, INACTIVE }
- * val status: Status // → " INT"
- * ```
- *
- * @param property The KSP property declaration to analyze
- * @param isPrimaryKey Whether this property is annotated with [@PrimaryKey]
- * @return SQLite type declaration string with leading space (e.g., " INT", " TEXT")
- * @throws IllegalStateException if the property type is not supported by SQLlin
- *
- * @see FullNameCache.getSQLTypeName
- */
- private fun getSQLiteType(property: KSPropertyDeclaration, isPrimaryKey: Boolean): String {
- val declaration = property.type.resolve().declaration
- return when (declaration) {
- is KSTypeAlias -> {
- val realDeclaration = declaration.type.resolve().declaration
- FullNameCache.getSQLTypeName(realDeclaration.typeName, isPrimaryKey) ?: kotlin.run {
- if (realDeclaration is KSClassDeclaration && realDeclaration.classKind == ClassKind.ENUM_CLASS)
- FullNameCache.getSQLTypeName(FullNameCache.INT, isPrimaryKey)
- else
- null
- }
- }
- is KSClassDeclaration if declaration.classKind == ClassKind.ENUM_CLASS ->
- FullNameCache.getSQLTypeName(FullNameCache.INT, isPrimaryKey)
- else -> FullNameCache.getSQLTypeName(declaration.typeName, isPrimaryKey)
- } ?: throw IllegalStateException("Hasn't support the type '${declaration.typeName}' yet")
- }
-
- /**
- * Extension property that resolves a property's fully qualified type name.
- */
- private inline val KSPropertyDeclaration.typeName
- get() = type.resolve().declaration.qualifiedName?.asString()
-
- /**
- * Extension property that retrieves a declaration's fully qualified type name.
- */
- private inline val KSDeclaration.typeName
- get() = qualifiedName?.asString()
}
\ No newline at end of file
diff --git a/sqllin-processor/src/main/kotlin/com/ctrip/sqllin/processor/ColumnConstraintParser.kt b/sqllin-processor/src/main/kotlin/com/ctrip/sqllin/processor/ColumnConstraintParser.kt
new file mode 100644
index 0000000..854afad
--- /dev/null
+++ b/sqllin-processor/src/main/kotlin/com/ctrip/sqllin/processor/ColumnConstraintParser.kt
@@ -0,0 +1,442 @@
+/*
+ * Copyright (C) 2025 Ctrip.com.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.ctrip.sqllin.processor
+
+import com.google.devtools.ksp.getClassDeclarationByName
+import com.google.devtools.ksp.processing.Resolver
+import com.google.devtools.ksp.symbol.ClassKind
+import com.google.devtools.ksp.symbol.KSClassDeclaration
+import com.google.devtools.ksp.symbol.KSPropertyDeclaration
+import com.google.devtools.ksp.symbol.KSTypeAlias
+import java.io.Writer
+
+/**
+ * Parser for column constraint annotations during CREATE TABLE statement generation.
+ *
+ * This class processes primary key, uniqueness, and collation annotations on properties
+ * to generate the appropriate SQLite column constraints in CREATE TABLE statements.
+ * It was extracted from [ClauseProcessor] to improve code organization and separation of concerns.
+ *
+ * ### Processing Workflow
+ * 1. **Parse property annotations**: [parseProperty] extracts constraint metadata and appends SQL
+ * 2. **Generate metadata**: [generateCodeForPrimaryKey] creates runtime metadata and table-level constraints
+ *
+ * ### Supported Annotations
+ *
+ * #### Primary Keys
+ * - **[@PrimaryKey][com.ctrip.sqllin.dsl.annotation.PrimaryKey]**: Single-column primary key with optional AUTOINCREMENT
+ * - **[@CompositePrimaryKey][com.ctrip.sqllin.dsl.annotation.CompositePrimaryKey]**: Multi-column primary key (table-level)
+ *
+ * #### Uniqueness Constraints
+ * - **[@Unique][com.ctrip.sqllin.dsl.annotation.Unique]**: Single-column UNIQUE constraint
+ * - **[@CompositeUnique][com.ctrip.sqllin.dsl.annotation.CompositeUnique]**: Multi-column UNIQUE constraint with group support
+ *
+ * #### Collation
+ * - **[@CollateNoCase][com.ctrip.sqllin.dsl.annotation.CollateNoCase]**: Case-insensitive text comparison (for String/Char only)
+ *
+ * ### Example Usage
+ * ```kotlin
+ * // In ClauseProcessor
+ * val parser = ColumnConstraintParser(resolver)
+ *
+ * // For each property:
+ * parser.parseProperty(sqlBuilder, property, "userId", isNotNull = true)
+ * // Appends: " BIGINT NOT NULL"
+ *
+ * // After all properties processed:
+ * parser.generateCodeForPrimaryKey(writer, sqlBuilder)
+ * // Generates: override val primaryKeyInfo = PrimaryKeyInfo(...)
+ * // Appends: ",PRIMARY KEY(col1,col2)" for composite keys
+ * ```
+ *
+ * ### Validation Rules
+ * - Cannot use both [@PrimaryKey] and [@CompositePrimaryKey] on the same property
+ * - Primary key properties must be nullable (SQLite rowid aliasing requirement)
+ * - Only one [@PrimaryKey] annotation allowed per table
+ * - AUTOINCREMENT requires Long type (mapped to INTEGER in SQLite)
+ * - [@CollateNoCase] can only be applied to String or Char properties
+ * - [@CompositePrimaryKey] properties must be non-nullable
+ *
+ * @param resolver KSP resolver for looking up annotation types
+ *
+ * @author Yuang Qiao
+ * @see ClauseProcessor
+ * @see com.ctrip.sqllin.dsl.annotation.PrimaryKey
+ * @see com.ctrip.sqllin.dsl.annotation.CompositePrimaryKey
+ * @see com.ctrip.sqllin.dsl.annotation.Unique
+ * @see com.ctrip.sqllin.dsl.annotation.CompositeUnique
+ * @see com.ctrip.sqllin.dsl.annotation.CollateNoCase
+ */
+class ColumnConstraintParser(resolver: Resolver) {
+
+ private companion object {
+ const val ANNOTATION_PRIMARY_KEY = "com.ctrip.sqllin.dsl.annotation.PrimaryKey"
+ const val ANNOTATION_COMPOSITE_PRIMARY_KEY = "com.ctrip.sqllin.dsl.annotation.CompositePrimaryKey"
+ const val ANNOTATION_UNIQUE = "com.ctrip.sqllin.dsl.annotation.Unique"
+ const val ANNOTATION_COMPOSITE_UNIQUE = "com.ctrip.sqllin.dsl.annotation.CompositeUnique"
+ const val ANNOTATION_NO_CASE = "com.ctrip.sqllin.dsl.annotation.CollateNoCase"
+
+ const val PROMPT_CANT_ADD_BOTH_ANNOTATION = "You can't add both @PrimaryKey and @CompositePrimaryKey to the same property."
+ const val PROMPT_PRIMARY_KEY_MUST_NOT_NULL = "The primary key must be not-null."
+ const val PROMPT_PRIMARY_KEY_TYPE = """The primary key's type must be Long when you set the parameter "isAutoincrement = true" in the @PrimaryKey annotation."""
+ const val PROMPT_PRIMARY_KEY_USE_COUNT = "You can only use @PrimaryKey to annotate one property in a class."
+ const val PROMPT_NO_CASE_MUST_FOR_TEXT = "You can only add the @CollateNoCase annotation to a String or Char typed property."
+ }
+
+ private val primaryKeyAnnotationName = resolver.getClassDeclarationByName(ANNOTATION_PRIMARY_KEY)!!.asStarProjectedType()
+ private val compositePrimaryKeyName = resolver.getClassDeclarationByName(ANNOTATION_COMPOSITE_PRIMARY_KEY)!!.asStarProjectedType()
+ private val noCaseAnnotationName = resolver.getClassDeclarationByName(ANNOTATION_NO_CASE)!!.asStarProjectedType()
+ private val uniqueAnnotationName = resolver.getClassDeclarationByName(ANNOTATION_UNIQUE)!!.asStarProjectedType()
+
+ // Primary key tracking for metadata generation
+ private var primaryKeyName: String? = null
+ private var isAutomaticIncrement = false
+ var isRowId = false
+ private set
+ private val compositePrimaryKeys = ArrayList<String>()
+ private var isContainsPrimaryKey = false
+
+ // Track composite unique constraints: group number → list of column names
+ private val compositeUniqueColumns = HashMap<Int, ArrayList<String>>()
+
+ /**
+ * Parses property annotations and appends corresponding SQLite constraints to the CREATE TABLE statement.
+ *
+ * This method processes all constraint-related annotations on a property and generates the appropriate
+ * SQLite type declaration and constraint clauses. It accumulates metadata for table-level constraints
+ * (composite primary keys and composite unique constraints) which are later output by [generateCodeForPrimaryKey].
+ *
+ * ### Generated SQL Patterns
+ *
+ * #### Basic Type with NOT NULL
+ * ```kotlin
+ * val age: Int // Non-nullable
+ * // Generated: age INT NOT NULL
+ * ```
+ *
+ * #### Primary Key
+ * ```kotlin
+ * @PrimaryKey(isAutoincrement = true)
+ * val id: Long?
+ * // Generated: id INTEGER PRIMARY KEY AUTOINCREMENT
+ * ```
+ *
+ * #### Composite Primary Key
+ * ```kotlin
+ * @CompositePrimaryKey
+ * val userId: Long
+ * // Column: userId BIGINT
+ * // Later appended: ,PRIMARY KEY(userId,productId)
+ * ```
+ *
+ * #### Unique with Collation
+ * ```kotlin
+ * @Unique
+ * @CollateNoCase
+ * val username: String
+ * // Generated: username TEXT COLLATE NOCASE UNIQUE
+ * ```
+ *
+ * #### Composite Unique (Multi-Group)
+ * ```kotlin
+ * @CompositeUnique(group = [0, 1])
+ * val email: String
+ * // Column: email TEXT
+ * // Later appended: ,UNIQUE(email,phone),UNIQUE(email,username)
+ * ```
+ *
+ * ### Processing Order
+ * 1. Determine SQLite type via [getSQLiteType]
+ * 2. Apply PRIMARY KEY constraint if [@PrimaryKey] present
+ * 3. Collect [@CompositePrimaryKey] columns for table-level constraint
+ * 4. Apply NOT NULL for non-nullable, non-PK columns
+ * 5. Apply COLLATE NOCASE if [@CollateNoCase] present
+ * 6. Apply UNIQUE if [@Unique] present
+ * 7. Collect [@CompositeUnique] groups for table-level constraints
+ *
+ * ### State Mutations
+ * This method mutates internal state that is read by [generateCodeForPrimaryKey]:
+ * - Sets [primaryKeyName] for single-column primary keys
+ * - Adds to [compositePrimaryKeys] for composite primary keys
+ * - Populates [compositeUniqueColumns] for composite unique constraints
+ * - Updates [isAutomaticIncrement] and [isRowId] flags
+ *
+ * @param createSQLBuilder StringBuilder to append column definition and constraints to
+ * @param property The property declaration to process
+ * @param propertyName The name of the database column (may differ from property name)
+ * @param isNotNull Whether the property type is non-nullable in Kotlin
+ *
+ * @throws IllegalArgumentException if validation fails (see class-level documentation for rules)
+ *
+ * @see generateCodeForPrimaryKey
+ * @see getSQLiteType
+ */
+ @Suppress("UNCHECKED_CAST")
+ fun parseProperty(
+ createSQLBuilder: StringBuilder,
+ property: KSPropertyDeclaration,
+ propertyName: String,
+ isNotNull: Boolean,
+ ) {
+ // Collect the information of the primary key(s).
+ val annotationKSType = property.annotations.map { it.annotationType.resolve() }
+ val isPrimaryKey = annotationKSType.any { it.isAssignableFrom(primaryKeyAnnotationName) }
+
+ with(createSQLBuilder) {
+ val type = getSQLiteType(property, isPrimaryKey)
+ append(type)
+
+ // Handle @PrimaryKey annotation
+ if (isPrimaryKey) {
+ check(!annotationKSType.any { it.isAssignableFrom(compositePrimaryKeyName) }) { PROMPT_CANT_ADD_BOTH_ANNOTATION }
+ check(!isNotNull) { PROMPT_PRIMARY_KEY_MUST_NOT_NULL }
+ check(!isContainsPrimaryKey) { PROMPT_PRIMARY_KEY_USE_COUNT }
+ isContainsPrimaryKey = true
+ primaryKeyName = propertyName
+
+ append(" PRIMARY KEY")
+
+ isAutomaticIncrement = property.annotations.find {
+ it.annotationType.resolve().declaration.qualifiedName?.asString() == ANNOTATION_PRIMARY_KEY
+ }?.arguments?.firstOrNull()?.value as? Boolean ?: false
+ val isLong = type == " INTEGER" || type == " BIGINT"
+ if (isAutomaticIncrement) {
+ check(isLong) { PROMPT_PRIMARY_KEY_TYPE }
+ append(" AUTOINCREMENT")
+ }
+ isRowId = isLong
+ } else if (annotationKSType.any { it.isAssignableFrom(compositePrimaryKeyName) }) {
+ // Handle @CompositePrimaryKey - collect for table-level constraint
+ check(isNotNull) { PROMPT_PRIMARY_KEY_MUST_NOT_NULL }
+ compositePrimaryKeys.add(propertyName)
+ } else if (isNotNull) {
+ // Add NOT NULL constraint for non-nullable, non-PK columns
+ append(" NOT NULL")
+ }
+
+ // Handle @CollateNoCase annotation - must be on text columns
+ if (annotationKSType.any { it.isAssignableFrom(noCaseAnnotationName) }) {
+ check(type == " TEXT" || type == " CHAR(1)") { PROMPT_NO_CASE_MUST_FOR_TEXT }
+ append(" COLLATE NOCASE")
+ }
+
+ // Handle @Unique annotation - single column uniqueness
+ if (annotationKSType.any { it.isAssignableFrom(uniqueAnnotationName) })
+ append(" UNIQUE")
+
+ // Handle @CompositeUnique annotation - collect for table-level constraint
+ val compositeUniqueAnnotation = property.annotations
+ .find { it.annotationType.resolve().declaration.qualifiedName?.asString() == ANNOTATION_COMPOSITE_UNIQUE }
+
+ compositeUniqueAnnotation?.run {
+ // Extract group numbers from annotation (defaults to group 0 if not specified)
+ arguments
+ .firstOrNull { it.name?.asString() == "group" }
+ .let {
+ val list = if (it == null) {
+ listOf(0) // Default to group 0
+ } else {
+ it.value as? List<Int> ?: listOf(0)
+ }
+ // Add this property to each specified group
+ list.forEach { group ->
+ val groupList = compositeUniqueColumns[group] ?: ArrayList<String>().also { gl ->
+ compositeUniqueColumns[group] = gl
+ }
+ groupList.add(propertyName)
+ }
+ }
+ }
+ }
+ }
+
+ /**
+ * Generates runtime primary key metadata and appends table-level constraints to CREATE TABLE statement.
+ *
+ * This method performs two critical tasks:
+ * 1. **Writes Kotlin code** to the output file that overrides the `primaryKeyInfo` property
+ * 2. **Appends SQL** to the CREATE TABLE statement for composite primary keys and composite unique constraints
+ *
+ * This method must be called **after** all properties have been processed by [parseProperty],
+ * as it consumes the accumulated state from those calls.
+ *
+ * ### Generated Kotlin Code Patterns
+ *
+ * #### No Primary Key
+ * ```kotlin
+ * override val primaryKeyInfo = null
+ * ```
+ *
+ * #### Single-Column Primary Key
+ * ```kotlin
+ * override val primaryKeyInfo = PrimaryKeyInfo(
+ * primaryKeyName = "id",
+ * isAutomaticIncrement = true,
+ * isRowId = true,
+ * compositePrimaryKeys = null,
+ * )
+ * ```
+ *
+ * #### Composite Primary Key
+ * ```kotlin
+ * override val primaryKeyInfo = PrimaryKeyInfo(
+ * primaryKeyName = null,
+ * isAutomaticIncrement = false,
+ * isRowId = false,
+ * compositePrimaryKeys = listOf(
+ * "userId",
+ * "productId",
+ * )
+ * )
+ * ```
+ *
+ * ### Appended SQL Patterns
+ *
+ * #### Composite Primary Key Constraint
+ * ```sql
+ * ,PRIMARY KEY(userId,productId)
+ * ```
+ *
+ * #### Composite Unique Constraints (Multiple Groups)
+ * ```sql
+ * ,UNIQUE(email,phone)
+ * ,UNIQUE(username,displayName)
+ * ```
+ *
+ * ### State Dependencies
+ * This method reads state accumulated by [parseProperty]:
+ * - [primaryKeyName]: Name of single-column primary key (if any)
+ * - [isAutomaticIncrement]: Whether AUTOINCREMENT is enabled
+ * - [isRowId]: Whether the primary key can serve as SQLite rowid alias
+ * - [compositePrimaryKeys]: List of columns in composite primary key
+ * - [compositeUniqueColumns]: Map of group number to columns for UNIQUE constraints
+ *
+ * @param writer Writer for generating Kotlin code (primaryKeyInfo property)
+ * @param createSQLBuilder StringBuilder to append table-level SQL constraints to
+ *
+ * @see parseProperty
+ * @see com.ctrip.sqllin.dsl.sql.PrimaryKeyInfo
+ */
+ fun generateCodeForPrimaryKey(writer: Writer, createSQLBuilder: StringBuilder) {
+ // Write the override instance for property `primaryKeyInfo`.
+ with(writer) {
+ if (primaryKeyName == null && compositePrimaryKeys.isEmpty()) {
+ write(" override val primaryKeyInfo = null\n\n")
+ } else {
+ write(" override val primaryKeyInfo = PrimaryKeyInfo(\n")
+ if (primaryKeyName == null) {
+ write(" primaryKeyName = null,\n")
+ } else {
+ write(" primaryKeyName = \"$primaryKeyName\",\n")
+ }
+ write(" isAutomaticIncrement = $isAutomaticIncrement,\n")
+ write(" isRowId = $isRowId,\n")
+ if (compositePrimaryKeys.isEmpty()) {
+ write(" compositePrimaryKeys = null,\n")
+ } else {
+ write(" compositePrimaryKeys = listOf(\n")
+ compositePrimaryKeys.forEach {
+ write(" \"$it\",\n")
+ }
+ write(" )\n")
+ }
+ write(" )\n\n")
+ }
+ }
+ // Append table-level constraints to CREATE TABLE statement
+ with(createSQLBuilder) {
+ // Add composite primary key constraint if present
+ compositePrimaryKeys.takeIf { it.isNotEmpty() }?.let {
+ append(",PRIMARY KEY(")
+ append(it[0])
+ for (i in 1 ..< it.size) {
+ append(',')
+ append(it[i])
+ }
+ append(')')
+ }
+
+ // Add composite unique constraints for each group
+ compositeUniqueColumns.values.forEach {
+ if (it.isEmpty())
+ return@forEach
+ append(",UNIQUE(")
+ append(it[0])
+ for (i in 1 ..< it.size) {
+ append(',')
+ append(it[i])
+ }
+ append(')')
+ }
+ }
+ }
+
+ /**
+ * Determines the SQLite type declaration for a given property.
+ *
+ * This function resolves the Kotlin type of a property to its corresponding SQLite type
+ * string, handling type aliases and enum classes. The result is used in compile-time
+ * CREATE TABLE statement generation.
+ *
+ * ### Type Resolution Strategy
+ * 1. **Type Aliases**: Resolves to the underlying type, then maps to SQLite type
+ * 2. **Enum Classes**: Maps to SQLite INT type (enums are stored as ordinals)
+ * 3. **Standard Types**: Direct mapping via [FullNameCache.getSQLTypeName]
+ *
+ * ### Primary Key Special Handling
+ * When `isPrimaryKey` is true and the property is of type [Long], the function returns
+ * " INTEGER" instead of " BIGINT" to enable SQLite's rowid aliasing optimization.
+ *
+ * ### Example Mappings
+ * ```kotlin
+ * // Standard type
+ * val age: Int // → " INT"
+ *
+ * // Type alias
+ * typealias UserId = Long
+ * val id: UserId // → " BIGINT" (or " INTEGER" if primary key)
+ *
+ * // Enum class
+ * enum class Status { ACTIVE, INACTIVE }
+ * val status: Status // → " INT"
+ * ```
+ *
+ * @param property The KSP property declaration to analyze
+ * @param isPrimaryKey Whether this property is annotated with [@PrimaryKey]
+ * @return SQLite type declaration string with leading space (e.g., " INT", " TEXT")
+ * @throws IllegalStateException if the property type is not supported by SQLlin
+ *
+ * @see FullNameCache.getSQLTypeName
+ */
+ private fun getSQLiteType(property: KSPropertyDeclaration, isPrimaryKey: Boolean): String {
+ val declaration = property.type.resolve().declaration
+ return when (declaration) {
+ is KSTypeAlias -> {
+ val realDeclaration = declaration.type.resolve().declaration
+ FullNameCache.getSQLTypeName(realDeclaration.typeName, isPrimaryKey) ?: kotlin.run {
+ if (realDeclaration is KSClassDeclaration && realDeclaration.classKind == ClassKind.ENUM_CLASS)
+ FullNameCache.getSQLTypeName(FullNameCache.INT, isPrimaryKey)
+ else
+ null
+ }
+ }
+ is KSClassDeclaration if declaration.classKind == ClassKind.ENUM_CLASS ->
+ FullNameCache.getSQLTypeName(FullNameCache.INT, isPrimaryKey)
+ else -> FullNameCache.getSQLTypeName(declaration.typeName, isPrimaryKey)
+ } ?: throw IllegalStateException("The type '${declaration.typeName}' is not supported yet")
+ }
+}
\ No newline at end of file
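The composite-unique handling above spans two phases: `parseProperty` fills a group-number → column-list map, and `generateCodeForPrimaryKey` later emits one `,UNIQUE(...)` clause per group. The emission step can be sketched in isolation (table and column names are hypothetical):

```kotlin
// Sketch of the table-level UNIQUE emission mirroring generateCodeForPrimaryKey():
// one ",UNIQUE(col1,col2,...)" clause per collected group.
fun appendCompositeUnique(builder: StringBuilder, groups: Map<Int, List<String>>) {
    groups.values.forEach { columns ->
        if (columns.isEmpty()) return@forEach
        builder.append(",UNIQUE(")
        builder.append(columns.joinToString(","))
        builder.append(')')
    }
}

fun main() {
    val sql = StringBuilder("CREATE TABLE Person(email TEXT,phone TEXT,username TEXT")
    // email participates in groups 0 and 1, as with @CompositeUnique(group = [0, 1])
    val groups = linkedMapOf(
        0 to listOf("email", "phone"),
        1 to listOf("email", "username"),
    )
    appendCompositeUnique(sql, groups)
    sql.append(')')
    println(sql) // CREATE TABLE Person(email TEXT,phone TEXT,username TEXT,UNIQUE(email,phone),UNIQUE(email,username))
}
```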
diff --git a/sqllin-processor/src/main/kotlin/com/ctrip/sqllin/processor/Converter.kt b/sqllin-processor/src/main/kotlin/com/ctrip/sqllin/processor/Converter.kt
new file mode 100644
index 0000000..fe47a23
--- /dev/null
+++ b/sqllin-processor/src/main/kotlin/com/ctrip/sqllin/processor/Converter.kt
@@ -0,0 +1,111 @@
+/*
+ * Copyright (C) 2025 Ctrip.com.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.ctrip.sqllin.processor
+
+import com.google.devtools.ksp.symbol.KSDeclaration
+import com.google.devtools.ksp.symbol.KSPropertyDeclaration
+
+/**
+ * Converts a [Trigger][com.ctrip.sqllin.dsl.annotation.Trigger] enum name to its SQL representation.
+ *
+ * This function transforms the Kotlin enum constant name (using underscore separators)
+ * into the corresponding SQL syntax (using space separators).
+ *
+ * ### Examples
+ * ```kotlin
+ * "ON_DELETE_CASCADE".triggerNameToSQL() // Returns: "ON DELETE CASCADE"
+ * "ON_UPDATE_SET_NULL".triggerNameToSQL() // Returns: "ON UPDATE SET NULL"
+ * "ON_DELETE_RESTRICT".triggerNameToSQL() // Returns: "ON DELETE RESTRICT"
+ * ```
+ *
+ * ### Usage
+ * This function is used internally by [ForeignKeyParser] during CREATE TABLE statement
+ * generation to convert [Trigger][com.ctrip.sqllin.dsl.annotation.Trigger] enum values
+ * into valid SQLite syntax.
+ *
+ * @receiver The trigger enum name (e.g., "ON_DELETE_CASCADE")
+ * @return The SQL representation with underscores replaced by spaces (e.g., "ON DELETE CASCADE")
+ */
+fun String.triggerNameToSQL(): String = replace('_', ' ')
+
+/**
+ * Extension property that resolves a property's fully qualified type name.
+ *
+ * This property resolves the property's type through KSP's type system and extracts
+ * its fully qualified name. Used throughout the processor for type mapping and code generation.
+ *
+ * ### Examples
+ * ```kotlin
+ * // For a property: val age: Int
+ * property.typeName // Returns: "kotlin.Int"
+ *
+ * // For a property: val user: com.example.User
+ * property.typeName // Returns: "com.example.User"
+ *
+ * // For a nullable property: val name: String?
+ * property.typeName // Returns: "kotlin.String" (nullability is separate)
+ * ```
+ *
+ * ### Type Resolution
+ * This property performs:
+ * 1. Resolves the property's type (`type.resolve()`)
+ * 2. Gets the declaration of that type
+ * 3. Extracts the fully qualified name
+ *
+ * ### Usage in Processor
+ * - Type mapping to SQLite types in [FullNameCache.getSQLTypeName]
+ * - Clause element type generation in [ClauseProcessor.getClauseElementTypeStr]
+ * - Default value generation in [ClauseProcessor.getDefaultValueByType]
+ * - Enum type detection in [ColumnConstraintParser.getSQLiteType]
+ *
+ * @return The fully qualified type name (e.g., "kotlin.Int", "kotlin.String"), or null if unavailable
+ *
+ * @see KSDeclaration.typeName
+ * @see ColumnConstraintParser.getSQLiteType
+ */
+inline val KSPropertyDeclaration.typeName
+ get() = type.resolve().declaration.qualifiedName?.asString()
+
+/**
+ * Extension property that retrieves a declaration's fully qualified type name.
+ *
+ * This is a convenience property for accessing a declaration's qualified name,
+ * providing a more expressive API than calling `qualifiedName?.asString()` directly.
+ *
+ * ### Examples
+ * ```kotlin
+ * // For a class declaration of: class User
+ * classDeclaration.typeName // Returns: "com.example.User"
+ *
+ * // For a type alias: typealias UserId = Long
+ * typeAliasDeclaration.typeName // Returns: "com.example.UserId"
+ *
+ * // For an enum entry: enum class Status { ACTIVE, INACTIVE }
+ * enumEntryDeclaration.typeName // Returns: "com.example.Status.ACTIVE"
+ * ```
+ *
+ * ### Usage in Processor
+ * - Resolving underlying types in type alias handling
+ * - Getting enum class names for ClauseEnum type generation
+ * - Default value generation for enum types
+ *
+ * @return The fully qualified name of the declaration, or null if unavailable
+ *
+ * @see KSPropertyDeclaration.typeName
+ */
+inline val KSDeclaration.typeName
+ get() = qualifiedName?.asString()
\ No newline at end of file
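Between the two files in this patch, a quick standalone sketch (not part of the patch) illustrates the string-level helper above: `triggerNameToSQL()` is a plain underscore-to-space substitution mapping `Trigger` enum constant names onto SQLite clause text.

```kotlin
// Standalone sketch (not part of the patch): the underscore-to-space
// conversion that triggerNameToSQL() performs on Trigger enum names.
fun String.triggerNameToSQL(): String = replace('_', ' ')

fun main() {
    // Enum constant names map 1:1 onto SQLite trigger clauses.
    check("ON_DELETE_CASCADE".triggerNameToSQL() == "ON DELETE CASCADE")
    check("ON_UPDATE_SET_NULL".triggerNameToSQL() == "ON UPDATE SET NULL")
    println("ok")
}
```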
diff --git a/sqllin-processor/src/main/kotlin/com/ctrip/sqllin/processor/ForeignKeyParser.kt b/sqllin-processor/src/main/kotlin/com/ctrip/sqllin/processor/ForeignKeyParser.kt
new file mode 100644
index 0000000..586a197
--- /dev/null
+++ b/sqllin-processor/src/main/kotlin/com/ctrip/sqllin/processor/ForeignKeyParser.kt
@@ -0,0 +1,409 @@
+/*
+ * Copyright (C) 2025 Ctrip.com.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package com.ctrip.sqllin.processor
+
+import com.google.devtools.ksp.symbol.ClassKind
+import com.google.devtools.ksp.symbol.KSAnnotation
+import com.google.devtools.ksp.symbol.KSClassDeclaration
+
+/**
+ * Parser for foreign key constraint annotations during code generation.
+ *
+ * This class processes [@ForeignKeyGroup][com.ctrip.sqllin.dsl.annotation.ForeignKeyGroup],
+ * [@ForeignKey][com.ctrip.sqllin.dsl.annotation.ForeignKey], and
+ * [@References][com.ctrip.sqllin.dsl.annotation.References] annotations to generate
+ * the appropriate SQLite FOREIGN KEY clauses in CREATE TABLE statements.
+ *
+ * ### Processing Workflow
+ * 1. **Parse class-level annotations**: [parseGroups] extracts [@ForeignKeyGroup] metadata
+ * 2. **Parse property annotations**: [parseColumnAnnotations] processes [@ForeignKey] and [@References]
+ * 3. **Generate SQL**: [generateCodeForForeignKey] appends FOREIGN KEY clauses to CREATE TABLE
+ *
+ * ### Supported Annotation Patterns
+ *
+ * #### Pattern 1: Column-level with @References
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * data class Order(
+ * @PrimaryKey val id: Long?,
+ * @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_CASCADE)
+ * val userId: Long
+ * )
+ * // Generated: userId BIGINT REFERENCES User(id) ON DELETE CASCADE
+ * ```
+ *
+ * #### Pattern 2: Table-level with @ForeignKeyGroup + @ForeignKey
+ * ```kotlin
+ * @DBRow
+ * @Serializable
+ * @ForeignKeyGroup(group = 0, tableName = "Product", trigger = Trigger.ON_DELETE_CASCADE)
+ * data class OrderItem(
+ * @PrimaryKey val id: Long?,
+ * @ForeignKey(group = 0, reference = "categoryId") val category: Int,
+ * @ForeignKey(group = 0, reference = "code") val productCode: String
+ * )
+ * // Generated: FOREIGN KEY (category,productCode) REFERENCES Product(categoryId,code) ON DELETE CASCADE
+ * ```
+ *
+ * ### Validation Rules
+ * - [@ForeignKeyGroup] groups must have unique group numbers
+ * - [@ForeignKey] annotations must reference a declared [@ForeignKeyGroup]
+ * - Properties with `ON_DELETE_SET_NULL` or `ON_UPDATE_SET_NULL` must be nullable
+ * - [@References] foreignKeys array cannot be empty
+ * - Foreign key groups must have at least one [@ForeignKey] property
+ *
+ * @author Yuang Qiao
+ * @see com.ctrip.sqllin.dsl.annotation.ForeignKeyGroup
+ * @see com.ctrip.sqllin.dsl.annotation.ForeignKey
+ * @see com.ctrip.sqllin.dsl.annotation.References
+ * @see com.ctrip.sqllin.dsl.annotation.Trigger
+ */
+class ForeignKeyParser {
+
+ companion object {
+ const val ANNOTATION_GROUP = "com.ctrip.sqllin.dsl.annotation.ForeignKeyGroup"
+ const val ANNOTATION_REFERENCES = "com.ctrip.sqllin.dsl.annotation.References"
+ const val ANNOTATION_FOREIGN_KEY = "com.ctrip.sqllin.dsl.annotation.ForeignKey"
+ const val ANNOTATION_DEFAULT = "com.ctrip.sqllin.dsl.annotation.Default"
+ }
+
+ /**
+ * Map of group number to foreign key metadata.
+ * Populated by [parseGroups] and consumed by [generateCodeForForeignKey].
+ */
+    private val groupMap = HashMap<Int, ForeignKeyEntity>()
+
+ /**
+ * Parses class-level [@ForeignKeyGroup] annotations and stores their metadata.
+ *
+ * This method extracts foreign key group definitions from class annotations,
+ * including the referenced table name, trigger actions, and optional constraint names.
+ * The parsed metadata is stored in [groupMap] for later use by [generateCodeForForeignKey].
+ *
+ * ### Example
+ * ```kotlin
+ * @ForeignKeyGroup(
+ * group = 0,
+ * tableName = "User",
+ * trigger = Trigger.ON_DELETE_CASCADE,
+ * constraintName = "fk_order_user"
+ * )
+ * data class Order(...)
+ * ```
+ *
+ * ### Validation
+ * - Ensures `tableName` is not blank or empty
+ * - Validates that group numbers are unique (no duplicates)
+ * - Converts [Trigger] enum values to SQL strings
+ *
+ * @param annotations Sequence of class-level annotations to process
+ * @throws IllegalArgumentException if tableName is blank or group number is duplicated
+ */
+    fun parseGroups(annotations: Sequence<KSAnnotation>) {
+ annotations.forEach { annotation ->
+ if (annotation.annotationType.resolve().declaration.qualifiedName?.asString() == ANNOTATION_GROUP) {
+ var group = 0
+ var tableName = ""
+ var triggerEnumName = ""
+ var triggerSQL = ""
+ var constraintName = ""
+ annotation.arguments.forEach { argument ->
+ when (argument.name?.asString()) {
+ "group" -> group = argument.value as Int
+ "tableName" -> tableName = (argument.value as String).ifBlank {
+ throw IllegalArgumentException("The parameter `tableName` in @ForeignKeyGroup can't be blank or empty.")
+ }
+ "trigger" -> {
+ val declaration = argument.value as? KSClassDeclaration
+ if (declaration != null && declaration.classKind == ClassKind.ENUM_ENTRY) {
+ triggerEnumName = declaration.simpleName.asString()
+ if (triggerEnumName != "NULL") {
+ triggerSQL = triggerEnumName.triggerNameToSQL()
+ }
+ }
+ }
+ "constraintName" -> constraintName = argument.value as String
+ }
+ }
+
+ // Validate for duplicate groups
+ if (groupMap.containsKey(group)) {
+ throw IllegalArgumentException("Duplicate foreign key group `$group` declaration found.")
+ }
+
+ groupMap[group] = ForeignKeyEntity(
+ tableName = tableName,
+ triggerEnumName = triggerEnumName,
+ triggerSQL = triggerSQL,
+ constraintName = constraintName,
+ columns = ArrayList(),
+ references = ArrayList(),
+ )
+ }
+ }
+ }
+
+ /**
+ * Processes property-level foreign key annotations and generates SQL constraints.
+ *
+ * This method handles both [@References] (column-level) and [@ForeignKey] (table-level)
+ * annotations on properties. For @References, it directly appends the REFERENCES clause
+ * to the column definition. For @ForeignKey, it accumulates metadata in [groupMap] for
+ * later processing by [generateCodeForForeignKey].
+ *
+ * ### @References Processing
+ * Generates inline column-level foreign key constraint:
+ * ```kotlin
+ * @References(tableName = "User", foreignKeys = ["id"], trigger = Trigger.ON_DELETE_CASCADE)
+ * val userId: Long
+ * // Generated: userId BIGINT REFERENCES User(id) ON DELETE CASCADE
+ * ```
+ *
+ * ### @ForeignKey Processing
+ * Accumulates metadata for table-level constraint generation:
+ * ```kotlin
+ * @ForeignKey(group = 0, reference = "id")
+ * val userId: Long
+ * // Later generates: FOREIGN KEY (userId) REFERENCES User(id) ON DELETE CASCADE
+ * ```
+ *
+ * ### Validation
+ * - Ensures `tableName` is not blank
+ * - Validates that `foreignKeys` array is not empty
+ * - Checks that properties with SET_NULL triggers are nullable
+ * - Verifies that referenced [@ForeignKeyGroup] exists
+ *
+ * @param createSQLBuilder StringBuilder to append SQL fragments to (for @References only)
+ * @param annotations Sequence of property annotations to process
+ * @param propertyName The name of the property being processed
+ * @param isNotNull Whether the property is non-nullable
+ * @throws IllegalArgumentException if validation fails or referenced group doesn't exist
+ */
+ @Suppress("UNCHECKED_CAST")
+ fun parseColumnAnnotations(
+ createSQLBuilder: StringBuilder,
+        annotations: Sequence<KSAnnotation>,
+ propertyName: String,
+ isNotNull: Boolean,
+ ) {
+        val columnReferenceEntities = ArrayList<ColumnReferenceEntity>()
+ var defaultValue = ""
+ annotations.forEach { annotation ->
+ when (annotation.annotationType.resolve().declaration.qualifiedName?.asString()) {
+ ANNOTATION_REFERENCES -> {
+ val columnReferenceEntity = ColumnReferenceEntity()
+ annotation.arguments.forEach { argument ->
+ when (argument.name?.asString()) {
+ "tableName" -> columnReferenceEntity.tableName = (argument.value as String).ifBlank {
+ throw IllegalArgumentException("The parameter `tableName` can't be blank or empty.")
+ }
+ "trigger" -> {
+ val declaration = argument.value as? KSClassDeclaration
+ if (declaration != null && declaration.classKind == ClassKind.ENUM_ENTRY) {
+ val triggerEnumName = declaration.simpleName.asString()
+ if (triggerEnumName != "NULL") {
+ columnReferenceEntity.triggerSQL = triggerEnumName.triggerNameToSQL()
+ }
+ }
+ }
+ "constraintName" -> columnReferenceEntity.constraintName = argument.value as String
+ "foreignKeys" -> {
+                            columnReferenceEntity.foreignKeys = (argument.value as? List<String>)?.filter { it.isNotBlank() }
+ ?: throw IllegalArgumentException("The parameter `foreignKeys` can't be null.")
+ if (columnReferenceEntity.foreignKeys.isEmpty()) {
+ throw IllegalArgumentException("The parameter `foreignKeys` can't be empty or contain only blank values.")
+ }
+ }
+ }
+ }
+ columnReferenceEntities.add(columnReferenceEntity)
+ }
+ ANNOTATION_FOREIGN_KEY -> {
+ var group = 0
+ var reference = ""
+ annotation.arguments.forEach { argument ->
+ when (argument.name?.asString()) {
+ "group" -> group = argument.value as Int
+ "reference" -> reference = (argument.value as String).ifBlank {
+                                throw IllegalArgumentException("The parameter `reference` in @ForeignKey can't be blank or empty.")
+ }
+ }
+ }
+ val entity = groupMap[group] ?: throw IllegalArgumentException("Foreign key group `$group` hasn't been declared with @ForeignKeyGroup annotation.")
+ with(entity) {
+ if ((triggerEnumName == "ON_DELETE_SET_NULL" || triggerEnumName == "ON_UPDATE_SET_NULL") && isNotNull) {
+ throw IllegalArgumentException("Can't use trigger `ON_DELETE_SET_NULL` or `ON_UPDATE_SET_NULL` on a non-null property in foreign key group `$group`.")
+ }
+ columns.add(propertyName)
+ references.add(reference)
+ }
+ }
+ ANNOTATION_DEFAULT -> {
+ annotation
+ .arguments
+ .find { it.name?.asString() == "value" }
+ ?.let { defaultValue = it.value as String }
+ }
+ }
+ }
+
+ with(createSQLBuilder) {
+ val hasDefaultValue = defaultValue.isNotEmpty()
+ if (hasDefaultValue) {
+ append(" DEFAULT ")
+ append(defaultValue)
+ }
+ columnReferenceEntities.forEach {
+ if (it.constraintName.isNotEmpty()) {
+ append(" CONSTRAINT ")
+ append(it.constraintName)
+ }
+ append(" REFERENCES ")
+ append(it.tableName)
+ append('(')
+ append(it.foreignKeys.first())
+ for (i in 1 ..< it.foreignKeys.size) {
+ append(',')
+ append(it.foreignKeys[i])
+ }
+ append(')')
+ if (it.triggerSQL.isNotEmpty()) {
+ when (it.triggerSQL) {
+ "ON DELETE SET NULL", "ON UPDATE SET NULL" ->
+ check(!isNotNull) { "Can't use trigger `ON_DELETE_SET_NULL` or `ON_UPDATE_SET_NULL` on a non-null property." }
+                        "ON DELETE SET DEFAULT", "ON UPDATE SET DEFAULT" ->
+                            check(!isNotNull || hasDefaultValue) { "The column must be nullable or have a default value when using trigger 'ON DELETE SET DEFAULT' or 'ON UPDATE SET DEFAULT'" }
+ }
+ append(' ')
+ append(it.triggerSQL)
+ }
+ }
+ }
+ }
+
+ /**
+ * Generates table-level FOREIGN KEY clauses and appends them to the CREATE TABLE statement.
+ *
+ * This method processes all foreign key groups accumulated by [parseColumnAnnotations]
+ * and generates the corresponding FOREIGN KEY constraints at the table level. Each group
+ * is converted into a SQL clause of the form:
+ * ```sql
+ * FOREIGN KEY (col1, col2) REFERENCES ParentTable(ref1, ref2) ON DELETE CASCADE
+ * ```
+ *
+ * ### Example Output
+ * For a class with two foreign key groups:
+ * ```kotlin
+ * @ForeignKeyGroup(group = 0, tableName = "User", trigger = Trigger.ON_DELETE_CASCADE)
+ * @ForeignKeyGroup(group = 1, tableName = "Product", trigger = Trigger.ON_DELETE_RESTRICT)
+ * data class OrderItem(
+ * @ForeignKey(group = 0, reference = "id") val userId: Long,
+ * @ForeignKey(group = 1, reference = "id") val productId: Long
+ * )
+ * ```
+ * Generates:
+ * ```sql
+ * ,FOREIGN KEY (userId) REFERENCES User(id) ON DELETE CASCADE
+ * ,FOREIGN KEY (productId) REFERENCES Product(id) ON DELETE RESTRICT
+ * ```
+ *
+ * ### Validation
+ * - Ensures each group has at least one [@ForeignKey] property
+ * - Verifies that the number of columns matches the number of references
+ *
+ * @param createSQLBuilder StringBuilder containing the CREATE TABLE statement to append to
+ * @throws IllegalArgumentException if a foreign key group is declared but has no properties
+ */
+ fun generateCodeForForeignKey(createSQLBuilder: StringBuilder) {
+ if (groupMap.isEmpty())
+ return
+ with(createSQLBuilder) {
+ groupMap.forEach { (groupNum, entity) ->
+ // Validate entity has columns
+ if (entity.columns.isEmpty()) {
+ throw IllegalArgumentException("Foreign key group `$groupNum` was declared but no columns reference it with @ForeignKey annotation.")
+ }
+
+ // Validate columns and references match
+ if (entity.columns.size != entity.references.size) {
+ throw IllegalArgumentException("Internal error: columns and references size mismatch in foreign key group `$groupNum`.")
+ }
+
+ if (entity.constraintName.isNotEmpty()) {
+ append(",CONSTRAINT ")
+ append(entity.constraintName)
+ append(' ')
+ } else {
+ append(',')
+ }
+ append("FOREIGN KEY (")
+
+ append(entity.columns.first())
+ for (i in 1 ..< entity.columns.size) {
+ append(',')
+ append(entity.columns[i])
+ }
+
+ append(") REFERENCES ")
+ append(entity.tableName)
+ append('(')
+ append(entity.references.first())
+ for (i in 1 ..< entity.references.size) {
+ append(',')
+ append(entity.references[i])
+ }
+ append(')')
+
+ if (entity.triggerSQL.isNotEmpty()) {
+ append(' ')
+ append(entity.triggerSQL)
+ }
+ }
+ }
+ }
+
+ /**
+ * Internal data class representing a single foreign key constraint group.
+ *
+ * This class stores metadata for a foreign key constraint, including the
+ * referenced table, trigger actions, and the mapping between local columns
+ * and referenced columns.
+ *
+ * @property tableName The name of the parent table being referenced
+ * @property triggerEnumName The enum name of the trigger (e.g., "ON_DELETE_CASCADE")
+ * @property triggerSQL The SQL representation of the trigger (e.g., "ON DELETE CASCADE")
+ * @property constraintName Optional name for the constraint
+ * @property columns List of local column names participating in this foreign key
+ * @property references List of referenced column names in the parent table (parallel to [columns])
+ */
+ private class ForeignKeyEntity(
+ val tableName: String,
+ val triggerEnumName: String,
+ val triggerSQL: String,
+ val constraintName: String,
+        val columns: MutableList<String>,
+        val references: MutableList<String>,
+ )
+
+ private class ColumnReferenceEntity(
+ var tableName: String = "",
+ var triggerSQL: String = "",
+ var constraintName: String = "",
+        var foreignKeys: List<String> = emptyList(),
+ )
+}
\ No newline at end of file
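As a sanity check on the clause assembly described in `generateCodeForForeignKey`, the following standalone sketch (not part of the patch; the helper name `foreignKeyClause` and its parameters are illustrative only) reproduces the table-level FOREIGN KEY fragment for the composite-key example from the KDoc above.

```kotlin
// Standalone sketch (not part of the patch): mirrors the StringBuilder
// loop in generateCodeForForeignKey for one composite foreign key group.
fun foreignKeyClause(
    tableName: String,
    columns: List<String>,
    references: List<String>,
    triggerSQL: String = "",
): String = buildString {
    append(",FOREIGN KEY (")
    append(columns.joinToString(","))      // local columns, e.g. "category,productCode"
    append(") REFERENCES ")
    append(tableName)
    append('(')
    append(references.joinToString(","))   // parent columns, parallel to `columns`
    append(')')
    if (triggerSQL.isNotEmpty()) {
        append(' ')
        append(triggerSQL)                 // e.g. "ON DELETE CASCADE"
    }
}

fun main() {
    val clause = foreignKeyClause(
        tableName = "Product",
        columns = listOf("category", "productCode"),
        references = listOf("categoryId", "code"),
        triggerSQL = "ON DELETE CASCADE",
    )
    check(clause == ",FOREIGN KEY (category,productCode) REFERENCES Product(categoryId,code) ON DELETE CASCADE")
    println(clause)
}
```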